00:00:00.001 Started by upstream project "autotest-per-patch" build number 126118
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.012 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.013 The recommended git tool is: git
00:00:00.013 using credential 00000000-0000-0000-0000-000000000002
00:00:00.015 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.029 Fetching changes from the remote Git repository
00:00:00.032 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.047 Using shallow fetch with depth 1
00:00:00.047 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.047 > git --version # timeout=10
00:00:00.065 > git --version # 'git version 2.39.2'
00:00:00.065 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.082 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.082 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.211 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.234 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.254 Checking out Revision 308e970df89ed396a3f9dcf22fba8891259694e4 (FETCH_HEAD)
00:00:02.254 > git config core.sparsecheckout # timeout=10
00:00:02.271 > git read-tree -mu HEAD # timeout=10
00:00:02.287 > git checkout -f 308e970df89ed396a3f9dcf22fba8891259694e4 # timeout=5
00:00:02.304 Commit message: "jjb/create-perf-report: make job run concurrent"
00:00:02.304 > git rev-list --no-walk 308e970df89ed396a3f9dcf22fba8891259694e4 # timeout=10
00:00:02.388 [Pipeline] Start of Pipeline
00:00:02.400 [Pipeline] library
00:00:02.401 Loading library shm_lib@master
00:00:02.401 Library shm_lib@master is cached. Copying from home.
00:00:02.418 [Pipeline] node
00:00:02.429 Running on CYP12 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:02.430 [Pipeline] {
00:00:02.440 [Pipeline] catchError
00:00:02.441 [Pipeline] {
00:00:02.452 [Pipeline] wrap
00:00:02.461 [Pipeline] {
00:00:02.469 [Pipeline] stage
00:00:02.471 [Pipeline] { (Prologue)
00:00:02.646 [Pipeline] sh
00:00:02.928 + logger -p user.info -t JENKINS-CI
00:00:02.949 [Pipeline] echo
00:00:02.951 Node: CYP12
00:00:02.959 [Pipeline] sh
00:00:03.262 [Pipeline] setCustomBuildProperty
00:00:03.272 [Pipeline] echo
00:00:03.274 Cleanup processes
00:00:03.280 [Pipeline] sh
00:00:03.564 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:03.564 2293308 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:03.583 [Pipeline] sh
00:00:03.865 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:03.865 ++ grep -v 'sudo pgrep'
00:00:03.865 ++ awk '{print $1}'
00:00:03.865 + sudo kill -9
00:00:03.865 + true
00:00:03.878 [Pipeline] cleanWs
00:00:03.886 [WS-CLEANUP] Deleting project workspace...
00:00:03.886 [WS-CLEANUP] Deferred wipeout is used...
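The "Cleanup processes" step above guards against SPDK processes left over from a previous run on this node. A minimal standalone sketch of the same pipeline (assuming WORKSPACE points at the Jenkins workspace; this is not the job's literal script):

  #!/usr/bin/env bash
  # Sketch: kill leftover processes whose command line mentions the workspace.
  WORKSPACE=${WORKSPACE:-/var/jenkins/workspace/short-fuzz-phy-autotest}
  # pgrep -af prints "PID full-command-line"; drop the pgrep invocation itself.
  pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # kill fails when the PID list is empty, so tolerate that (the "+ true" above).
  sudo kill -9 $pids || true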
00:00:03.894 [WS-CLEANUP] done
00:00:03.897 [Pipeline] setCustomBuildProperty
00:00:03.907 [Pipeline] sh
00:00:04.190 + sudo git config --global --replace-all safe.directory '*'
00:00:04.277 [Pipeline] httpRequest
00:00:04.295 [Pipeline] echo
00:00:04.296 Sorcerer 10.211.164.101 is alive
00:00:04.302 [Pipeline] httpRequest
00:00:04.306 HttpMethod: GET
00:00:04.307 URL: http://10.211.164.101/packages/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz
00:00:04.307 Sending request to url: http://10.211.164.101/packages/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz
00:00:04.310 Response Code: HTTP/1.1 200 OK
00:00:04.310 Success: Status code 200 is in the accepted range: 200,404
00:00:04.310 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz
00:00:04.594 [Pipeline] sh
00:00:04.877 + tar --no-same-owner -xf jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz
00:00:04.892 [Pipeline] httpRequest
00:00:04.908 [Pipeline] echo
00:00:04.909 Sorcerer 10.211.164.101 is alive
00:00:04.941 [Pipeline] httpRequest
00:00:04.945 HttpMethod: GET
00:00:04.946 URL: http://10.211.164.101/packages/spdk_a49cd26ae44b3f19a6e8cd55fbeebc7693572c46.tar.gz
00:00:04.946 Sending request to url: http://10.211.164.101/packages/spdk_a49cd26ae44b3f19a6e8cd55fbeebc7693572c46.tar.gz
00:00:04.949 Response Code: HTTP/1.1 200 OK
00:00:04.949 Success: Status code 200 is in the accepted range: 200,404
00:00:04.950 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_a49cd26ae44b3f19a6e8cd55fbeebc7693572c46.tar.gz
00:00:24.603 [Pipeline] sh
00:00:24.886 + tar --no-same-owner -xf spdk_a49cd26ae44b3f19a6e8cd55fbeebc7693572c46.tar.gz
00:00:27.453 [Pipeline] sh
00:00:27.741 + git -C spdk log --oneline -n5
00:00:27.741 a49cd26ae test/accel: parametrize accel tests for DSA kernel mode
00:00:27.741 9ba518f8f test/common/autotest_common: managing idxd drivers setup
00:00:27.741 4cfe5ece8 test/setup: add configuration script for dsa devices
00:00:27.741 719d03c6a sock/uring: only register net impl if supported
00:00:27.741 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:00:27.753 [Pipeline] }
00:00:27.770 [Pipeline] // stage
00:00:27.778 [Pipeline] stage
00:00:27.780 [Pipeline] { (Prepare)
00:00:27.798 [Pipeline] writeFile
00:00:27.812 [Pipeline] sh
00:00:28.096 + logger -p user.info -t JENKINS-CI
00:00:28.109 [Pipeline] sh
00:00:28.394 + logger -p user.info -t JENKINS-CI
00:00:28.408 [Pipeline] sh
00:00:28.698 + cat autorun-spdk.conf
00:00:28.698 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:28.698 SPDK_TEST_FUZZER_SHORT=1
00:00:28.698 SPDK_TEST_FUZZER=1
00:00:28.698 SPDK_RUN_UBSAN=1
00:00:28.706 RUN_NIGHTLY=0
00:00:28.714 [Pipeline] readFile
00:00:28.786 [Pipeline] withEnv
00:00:28.788 [Pipeline] {
00:00:28.803 [Pipeline] sh
00:00:29.114 + set -ex
00:00:29.114 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:00:29.114 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:00:29.114 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:29.114 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:29.114 ++ SPDK_TEST_FUZZER=1
00:00:29.114 ++ SPDK_RUN_UBSAN=1
00:00:29.114 ++ RUN_NIGHTLY=0
00:00:29.114 + case $SPDK_TEST_NVMF_NICS in
00:00:29.114 + DRIVERS=
00:00:29.114 + [[ -n '' ]]
00:00:29.114 + exit 0
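autorun-spdk.conf is written by the job and then consumed as a plain shell fragment: the step above sources it and branches on the variables it sets (here SPDK_TEST_NVMF_NICS is unset, so no NIC drivers are selected and the step exits immediately). A sketch of that consumption pattern, with a hypothetical mlx5 mapping added purely for illustration:

  #!/usr/bin/env bash
  set -ex
  conf=/var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
  [[ -f $conf ]] && source "$conf"
  # Pick kernel drivers only when the conf requests an NVMf NIC flavor.
  case ${SPDK_TEST_NVMF_NICS:-} in
    mlx5) DRIVERS=mlx5_ib ;;   # hypothetical mapping, for illustration only
    *)    DRIVERS= ;;
  esac
  [[ -n $DRIVERS ]] || exit 0  # nothing to load in this run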
00:00:29.124 [Pipeline] }
00:00:29.140 [Pipeline] // withEnv
00:00:29.146 [Pipeline] }
00:00:29.166 [Pipeline] // stage
00:00:29.176 [Pipeline] catchError
00:00:29.177 [Pipeline] {
00:00:29.192 [Pipeline] timeout
00:00:29.192 Timeout set to expire in 30 min
00:00:29.193 [Pipeline] {
00:00:29.207 [Pipeline] stage
00:00:29.209 [Pipeline] { (Tests)
00:00:29.223 [Pipeline] sh
00:00:29.509 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:29.509 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:29.509 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:00:29.509 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:00:29.509 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:29.509 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:29.509 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:00:29.509 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:29.509 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:29.509 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:29.509 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:00:29.509 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:29.509 + source /etc/os-release
00:00:29.509 ++ NAME='Fedora Linux'
00:00:29.509 ++ VERSION='38 (Cloud Edition)'
00:00:29.509 ++ ID=fedora
00:00:29.509 ++ VERSION_ID=38
00:00:29.509 ++ VERSION_CODENAME=
00:00:29.509 ++ PLATFORM_ID=platform:f38
00:00:29.509 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:29.509 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:29.509 ++ LOGO=fedora-logo-icon
00:00:29.509 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:29.509 ++ HOME_URL=https://fedoraproject.org/
00:00:29.509 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:29.509 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:29.509 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:29.509 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:29.509 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:29.509 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:29.509 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:29.509 ++ SUPPORT_END=2024-05-14
00:00:29.509 ++ VARIANT='Cloud Edition'
00:00:29.509 ++ VARIANT_ID=cloud
00:00:29.509 + uname -a
00:00:29.509 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:29.509 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:00:32.811 Hugepages
00:00:32.811 node hugesize free / total
00:00:32.811 node0 1048576kB 0 / 0
00:00:32.811 node0 2048kB 0 / 0
00:00:32.811 node1 1048576kB 0 / 0
00:00:32.811 node1 2048kB 0 / 0
00:00:32.811
00:00:32.811 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:32.811 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:00:32.811 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:00:32.811 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:00:32.811 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:00:32.811 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:00:32.811 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:00:32.811 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:00:32.811 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:00:32.811 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:00:32.811 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:00:32.811 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:00:32.811 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:00:32.811 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:00:32.811 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:00:32.811 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:00:32.811 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:00:32.811 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
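The hugepage table printed by setup.sh status comes straight from sysfs; every pool on this box is empty (0 free / 0 total) because hugepages have not been reserved yet. A minimal sketch of reading the same counters, using the standard kernel paths (independent of SPDK):

  #!/usr/bin/env bash
  # Print per-NUMA-node hugepage pools, like the table above.
  for node in /sys/devices/system/node/node*; do
    for pool in "$node"/hugepages/hugepages-*; do
      size=${pool##*hugepages-}        # e.g. 2048kB or 1048576kB
      echo "${node##*/} $size $(cat "$pool/free_hugepages") / $(cat "$pool/nr_hugepages")"
    done
  done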
00:00:32.811 + rm -f /tmp/spdk-ld-path
00:00:32.811 + source autorun-spdk.conf
00:00:32.811 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.811 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:32.811 ++ SPDK_TEST_FUZZER=1
00:00:32.811 ++ SPDK_RUN_UBSAN=1
00:00:32.811 ++ RUN_NIGHTLY=0
00:00:32.811 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:32.811 + [[ -n '' ]]
00:00:32.811 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:32.811 + for M in /var/spdk/build-*-manifest.txt
00:00:32.811 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:32.811 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:32.811 + for M in /var/spdk/build-*-manifest.txt
00:00:32.811 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:32.811 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:32.811 ++ uname
00:00:32.811 + [[ Linux == \L\i\n\u\x ]]
00:00:32.811 + sudo dmesg -T
00:00:32.811 + sudo dmesg --clear
00:00:32.811 + dmesg_pid=2294399
00:00:32.811 + [[ Fedora Linux == FreeBSD ]]
00:00:32.811 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:32.811 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:32.811 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:32.811 + [[ -x /usr/src/fio-static/fio ]]
00:00:32.811 + export FIO_BIN=/usr/src/fio-static/fio
00:00:32.811 + FIO_BIN=/usr/src/fio-static/fio
00:00:32.811 + sudo dmesg -Tw
00:00:32.811 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:32.811 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:32.811 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:32.811 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:32.811 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:32.811 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:32.811 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:32.811 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:32.811 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:00:32.811 Test configuration:
00:00:32.811 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.811 SPDK_TEST_FUZZER_SHORT=1
00:00:32.811 SPDK_TEST_FUZZER=1
00:00:32.811 SPDK_RUN_UBSAN=1
00:00:32.811 RUN_NIGHTLY=0
13:26:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:00:32.811 13:26:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:32.811 13:26:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:32.811 13:26:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:32.811 13:26:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:32.811 13:26:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:32.811 13:26:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:32.811 13:26:21 -- paths/export.sh@5 -- $ export PATH
00:00:32.811 13:26:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
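The PATH values above contain the /opt toolchain directories more than once: each paths/export.sh line prepends unconditionally, so directories already on PATH are simply repeated (harmless, since lookup stops at the first hit). If idempotence were wanted, a guard like this would do it (a sketch, not the script's actual code):

  # Prepend a directory to PATH only if it is not already present.
  path_prepend() {
    case ":$PATH:" in
      *":$1:"*) ;;               # already on PATH, leave it alone
      *) PATH="$1:$PATH" ;;
    esac
  }
  path_prepend /opt/go/1.21.1/bin
  path_prepend /opt/golangci/1.54.2/bin
  path_prepend /opt/protoc/21.7/bin
  export PATH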
00:00:32.811 13:26:21 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:00:32.811 13:26:21 -- common/autobuild_common.sh@444 -- $ date +%s
00:00:32.811 13:26:21 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720783581.XXXXXX
00:00:32.811 13:26:21 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720783581.qbADtU
00:00:32.811 13:26:21 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:00:32.811 13:26:21 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:00:32.811 13:26:21 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
00:00:32.811 13:26:21 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:32.811 13:26:21 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:32.811 13:26:21 -- common/autobuild_common.sh@460 -- $ get_config_params
00:00:32.811 13:26:21 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:00:32.811 13:26:21 -- common/autotest_common.sh@10 -- $ set +x
00:00:32.811 13:26:21 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:32.811 13:26:21 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:00:32.811 13:26:21 -- pm/common@17 -- $ local monitor
00:00:32.811 13:26:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:32.811 13:26:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:32.811 13:26:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:32.811 13:26:21 -- pm/common@21 -- $ date +%s
00:00:32.811 13:26:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:32.811 13:26:21 -- pm/common@21 -- $ date +%s
00:00:32.811 13:26:21 -- pm/common@25 -- $ sleep 1
00:00:32.811 13:26:21 -- pm/common@21 -- $ date +%s
00:00:32.811 13:26:21 -- pm/common@21 -- $ date +%s
00:00:32.811 13:26:21 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720783581
00:00:32.811 13:26:21 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720783581
00:00:32.811 13:26:21 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720783581
00:00:32.811 13:26:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720783581
00:00:33.072 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720783581_collect-vmstat.pm.log
00:00:33.072 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720783581_collect-cpu-load.pm.log
00:00:33.072 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720783581_collect-cpu-temp.pm.log
00:00:33.072 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720783581_collect-bmc-pm.bmc.pm.log
00:00:34.016 13:26:22 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:00:34.016 13:26:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:34.016 13:26:22 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:34.016 13:26:22 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:34.016 13:26:22 -- spdk/autobuild.sh@16 -- $ date -u
00:00:34.016 Fri Jul 12 11:26:22 AM UTC 2024
00:00:34.016 13:26:22 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:34.016 v24.09-pre-205-ga49cd26ae
00:00:34.016 13:26:22 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:34.016 13:26:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:34.016 13:26:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:34.016 13:26:22 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:00:34.016 13:26:22 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:34.016 13:26:22 -- common/autotest_common.sh@10 -- $ set +x
00:00:34.016 ************************************
00:00:34.016 START TEST ubsan
00:00:34.016 ************************************
00:00:34.016 13:26:22 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:00:34.016 using ubsan
00:00:34.016
00:00:34.016 real 0m0.001s
00:00:34.016 user 0m0.000s
00:00:34.016 sys 0m0.001s
00:00:34.016 13:26:22 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:00:34.016 13:26:22 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:34.016 ************************************
00:00:34.016 END TEST ubsan
00:00:34.016 ************************************
00:00:34.016 13:26:22 -- common/autotest_common.sh@1142 -- $ return 0
00:00:34.016 13:26:22 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:34.016 13:26:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:34.016 13:26:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:34.016 13:26:22 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]]
00:00:34.016 13:26:22 -- spdk/autobuild.sh@52 -- $ llvm_precompile
00:00:34.016 13:26:22 -- common/autobuild_common.sh@432 -- $ run_test autobuild_llvm_precompile _llvm_precompile
00:00:34.016 13:26:22 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']'
00:00:34.016 13:26:22 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:34.016 13:26:22 -- common/autotest_common.sh@10 -- $ set +x
00:00:34.016 ************************************
00:00:34.016 START TEST autobuild_llvm_precompile
00:00:34.016 ************************************
00:00:34.016 13:26:22 autobuild_llvm_precompile -- common/autotest_common.sh@1123 -- $ _llvm_precompile
00:00:34.016 13:26:22 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version
00:00:34.016 13:26:22 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38)
00:00:34.016 Target: x86_64-redhat-linux-gnu
00:00:34.016 Thread model: posix
00:00:34.016 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]]
00:00:34.016 13:26:22 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16
00:00:34.016 13:26:22 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16
00:00:34.016 13:26:22 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16
00:00:34.016 13:26:22 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16
00:00:34.016 13:26:22 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16
00:00:34.016 13:26:22 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
00:00:34.016 13:26:22 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
00:00:34.016 13:26:22 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]]
00:00:34.016 13:26:22 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a'
00:00:34.016 13:26:22 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
00:00:34.277 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:00:34.277 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:00:34.849 Using 'verbs' RDMA provider
00:00:51.142 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:03.435 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:04.007 Creating mk/config.mk...done.
00:01:04.007 Creating mk/cc.flags.mk...done.
00:01:04.007 Type 'make' to build.
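The fuzzer_libs assignment in the trace above relies on two bash extglob patterns: @(16|16.0.6) matches exactly one of the alternatives (so both the clang-16 and clang-16.0.6 directory layouts resolve), and ?(-x86_64) matches the optional arch suffix clang appends on some installs. A standalone reproduction, with the values from this run (extglob must be enabled before the pattern line is parsed):

  #!/usr/bin/env bash
  shopt -s extglob nullglob
  clang_num=16
  clang_version=16.0.6
  # @(...) = exactly one alternative; ?(...) = zero or one occurrence.
  fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
  echo "fuzzer_lib resolved to: ${fuzzer_libs[0]:-<not found>}"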
00:01:04.007
00:01:04.007 real 0m29.843s
00:01:04.007 user 0m13.932s
00:01:04.007 sys 0m14.955s
00:01:04.007 13:26:52 autobuild_llvm_precompile -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:01:04.007 13:26:52 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x
00:01:04.007 ************************************
00:01:04.007 END TEST autobuild_llvm_precompile
00:01:04.007 ************************************
00:01:04.007 13:26:52 -- common/autotest_common.sh@1142 -- $ return 0
00:01:04.007 13:26:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:04.007 13:26:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:04.007 13:26:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:04.007 13:26:52 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]]
00:01:04.007 13:26:52 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
00:01:04.267 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:01:04.267 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:01:04.528 Using 'verbs' RDMA provider
00:01:18.140 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:30.363 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:30.363 Creating mk/config.mk...done.
00:01:30.363 Creating mk/cc.flags.mk...done.
00:01:30.363 Type 'make' to build.
00:01:30.363 13:27:18 -- spdk/autobuild.sh@69 -- $ run_test make make -j144
00:01:30.363 13:27:18 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:30.363 13:27:18 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:30.363 13:27:18 -- common/autotest_common.sh@10 -- $ set +x
00:01:30.363 ************************************
00:01:30.363 START TEST make
00:01:30.363 ************************************
00:01:30.363 13:27:18 make -- common/autotest_common.sh@1123 -- $ make -j144
00:01:30.363 make[1]: Nothing to be done for 'all'.
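run_test, used above for ubsan, autobuild_llvm_precompile, and now make, is the SPDK autotest_common.sh helper that brackets a command with START TEST / END TEST banners and times it. The visible behavior amounts to something like the following sketch (not SPDK's actual implementation, which does more bookkeeping):

  # Sketch of the banner/timing wrapper seen in this log.
  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"; local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }
  run_test make make -j144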
00:01:32.275 The Meson build system
00:01:32.275 Version: 1.3.1
00:01:32.275 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
00:01:32.275 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:32.275 Build type: native build
00:01:32.275 Project name: libvfio-user
00:01:32.275 Project version: 0.0.1
00:01:32.275 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)")
00:01:32.275 C linker for the host machine: clang-16 ld.bfd 2.39-16
00:01:32.275 Host machine cpu family: x86_64
00:01:32.275 Host machine cpu: x86_64
00:01:32.275 Run-time dependency threads found: YES
00:01:32.275 Library dl found: YES
00:01:32.275 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:32.275 Run-time dependency json-c found: YES 0.17
00:01:32.275 Run-time dependency cmocka found: YES 1.1.7
00:01:32.275 Program pytest-3 found: NO
00:01:32.275 Program flake8 found: NO
00:01:32.275 Program misspell-fixer found: NO
00:01:32.275 Program restructuredtext-lint found: NO
00:01:32.275 Program valgrind found: YES (/usr/bin/valgrind)
00:01:32.275 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:32.275 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:32.275 Compiler for C supports arguments -Wwrite-strings: YES
00:01:32.275 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:32.276 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:32.276 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:32.276 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:32.276 Build targets in project: 8
00:01:32.276 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:32.276 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:32.276
00:01:32.276 libvfio-user 0.0.1
00:01:32.276
00:01:32.276 User defined options
00:01:32.276 buildtype : debug
00:01:32.276 default_library: static
00:01:32.276 libdir : /usr/local/lib
00:01:32.276
00:01:32.276 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:32.276 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:32.536 [1/36] Compiling C object lib/libvfio-user.a.p/tran.c.o
00:01:32.536 [2/36] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:32.536 [3/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:32.536 [4/36] Compiling C object lib/libvfio-user.a.p/dma.c.o
00:01:32.536 [5/36] Compiling C object samples/lspci.p/lspci.c.o
00:01:32.536 [6/36] Compiling C object lib/libvfio-user.a.p/pci.c.o
00:01:32.536 [7/36] Compiling C object samples/null.p/null.c.o
00:01:32.536 [8/36] Compiling C object test/unit_tests.p/mocks.c.o
00:01:32.536 [9/36] Compiling C object lib/libvfio-user.a.p/irq.c.o
00:01:32.536 [10/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:32.536 [11/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:32.536 [12/36] Compiling C object lib/libvfio-user.a.p/migration.c.o
00:01:32.536 [13/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:32.537 [14/36] Compiling C object samples/server.p/server.c.o
00:01:32.537 [15/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o
00:01:32.537 [16/36] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:32.537 [17/36] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:32.537 [18/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:32.537 [19/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:32.537 [20/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:32.537 [21/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:32.537 [22/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o
00:01:32.537 [23/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:32.537 [24/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:32.537 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:32.537 [26/36] Compiling C object samples/client.p/client.c.o
00:01:32.537 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o
00:01:32.537 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:32.537 [29/36] Linking static target lib/libvfio-user.a
00:01:32.537 [30/36] Linking target samples/client
00:01:32.537 [31/36] Linking target test/unit_tests
00:01:32.537 [32/36] Linking target samples/gpio-pci-idio-16
00:01:32.537 [33/36] Linking target samples/null
00:01:32.537 [34/36] Linking target samples/server
00:01:32.537 [35/36] Linking target samples/shadow_ioeventfd_server
00:01:32.537 [36/36] Linking target samples/lspci
00:01:32.537 INFO: autodetecting backend as ninja
00:01:32.537 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:32.798 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:33.058 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:33.058 ninja: no work to do.
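The libvfio-user step above is an ordinary meson flow: configure a debug build with a static default_library, compile with ninja, then stage the result with a DESTDIR-redirected install so nothing touches the real /usr/local. Roughly equivalent standalone commands, using the paths from this job (a sketch, not the build system's literal invocation):

  #!/usr/bin/env bash
  src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
  build=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
  meson setup "$build" "$src" --buildtype=debug --default-library=static --libdir=/usr/local/lib
  ninja -C "$build"
  # DESTDIR prefixes every install path, staging the tree under build/libvfio-user.
  DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user \
    meson install --quiet -C "$build"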
00:01:39.740 The Meson build system
00:01:39.740 Version: 1.3.1
00:01:39.740 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk
00:01:39.740 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp
00:01:39.740 Build type: native build
00:01:39.740 Program cat found: YES (/usr/bin/cat)
00:01:39.740 Project name: DPDK
00:01:39.740 Project version: 24.03.0
00:01:39.740 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)")
00:01:39.740 C linker for the host machine: clang-16 ld.bfd 2.39-16
00:01:39.740 Host machine cpu family: x86_64
00:01:39.740 Host machine cpu: x86_64
00:01:39.740 Message: ## Building in Developer Mode ##
00:01:39.740 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:39.740 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:39.740 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:39.740 Program python3 found: YES (/usr/bin/python3)
00:01:39.741 Program cat found: YES (/usr/bin/cat)
00:01:39.741 Compiler for C supports arguments -march=native: YES
00:01:39.741 Checking for size of "void *" : 8
00:01:39.741 Checking for size of "void *" : 8 (cached)
00:01:39.741 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:39.741 Library m found: YES
00:01:39.741 Library numa found: YES
00:01:39.741 Has header "numaif.h" : YES
00:01:39.741 Library fdt found: NO
00:01:39.741 Library execinfo found: NO
00:01:39.741 Has header "execinfo.h" : YES
00:01:39.741 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:39.741 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:39.741 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:39.741 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:39.741 Run-time dependency openssl found: YES 3.0.9
00:01:39.741 Run-time dependency libpcap found: YES 1.10.4
00:01:39.741 Has header "pcap.h" with dependency libpcap: YES
00:01:39.741 Compiler for C supports arguments -Wcast-qual: YES
00:01:39.741 Compiler for C supports arguments -Wdeprecated: YES
00:01:39.741 Compiler for C supports arguments -Wformat: YES
00:01:39.741 Compiler for C supports arguments -Wformat-nonliteral: YES
00:01:39.741 Compiler for C supports arguments -Wformat-security: YES
00:01:39.741 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:39.741 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:39.741 Compiler for C supports arguments -Wnested-externs: YES
00:01:39.741 Compiler for C supports arguments -Wold-style-definition: YES
00:01:39.741 Compiler for C supports arguments -Wpointer-arith: YES
00:01:39.741 Compiler for C supports arguments -Wsign-compare: YES
00:01:39.741 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:39.741 Compiler for C supports arguments -Wundef: YES
00:01:39.741 Compiler for C supports arguments -Wwrite-strings: YES
00:01:39.741 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:39.741 Compiler for C supports arguments -Wno-packed-not-aligned: NO
00:01:39.741 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:39.741 Program objdump found: YES (/usr/bin/objdump)
00:01:39.741 Compiler for C supports arguments -mavx512f: YES
00:01:39.741 Checking if "AVX512 checking" compiles: YES
00:01:39.741 Fetching value of define "__SSE4_2__" : 1
00:01:39.741 Fetching value of define "__AES__" : 1
00:01:39.741 Fetching value of define "__AVX__" : 1
00:01:39.741 Fetching value of define "__AVX2__" : 1
00:01:39.741 Fetching value of define "__AVX512BW__" : 1
00:01:39.741 Fetching value of define "__AVX512CD__" : 1
00:01:39.741 Fetching value of define "__AVX512DQ__" : 1
00:01:39.741 Fetching value of define "__AVX512F__" : 1
00:01:39.741 Fetching value of define "__AVX512VL__" : 1
00:01:39.741 Fetching value of define "__PCLMUL__" : 1
00:01:39.741 Fetching value of define "__RDRND__" : 1
00:01:39.741 Fetching value of define "__RDSEED__" : 1
00:01:39.741 Fetching value of define "__VPCLMULQDQ__" : 1
00:01:39.741 Fetching value of define "__znver1__" : (undefined)
00:01:39.741 Fetching value of define "__znver2__" : (undefined)
00:01:39.741 Fetching value of define "__znver3__" : (undefined)
00:01:39.741 Fetching value of define "__znver4__" : (undefined)
00:01:39.741 Compiler for C supports arguments -Wno-format-truncation: NO
00:01:39.741 Message: lib/log: Defining dependency "log"
00:01:39.741 Message: lib/kvargs: Defining dependency "kvargs"
00:01:39.741 Message: lib/telemetry: Defining dependency "telemetry"
00:01:39.741 Checking for function "getentropy" : NO
00:01:39.741 Message: lib/eal: Defining dependency "eal"
00:01:39.741 Message: lib/ring: Defining dependency "ring"
00:01:39.741 Message: lib/rcu: Defining dependency "rcu"
00:01:39.741 Message: lib/mempool: Defining dependency "mempool"
00:01:39.741 Message: lib/mbuf: Defining dependency "mbuf"
00:01:39.741 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:39.741 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:39.741 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:39.741 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:39.741 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:39.741 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:01:39.741 Compiler for C supports arguments -mpclmul: YES
00:01:39.741 Compiler for C supports arguments -maes: YES
00:01:39.741 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:39.741 Compiler for C supports arguments -mavx512bw: YES
00:01:39.741 Compiler for C supports arguments -mavx512dq: YES
00:01:39.741 Compiler for C supports arguments -mavx512vl: YES
00:01:39.741 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:39.741 Compiler for C supports arguments -mavx2: YES
00:01:39.741 Compiler for C supports arguments -mavx: YES
00:01:39.741 Message: lib/net: Defining dependency "net"
00:01:39.741 Message: lib/meter: Defining dependency "meter"
00:01:39.741 Message: lib/ethdev: Defining dependency "ethdev"
00:01:39.741 Message: lib/pci: Defining dependency "pci"
00:01:39.741 Message: lib/cmdline: Defining dependency "cmdline"
00:01:39.741 Message: lib/hash: Defining dependency "hash"
00:01:39.741 Message: lib/timer: Defining dependency "timer"
00:01:39.741 Message: lib/compressdev: Defining dependency "compressdev"
00:01:39.741 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:39.741 Message: lib/dmadev: Defining dependency "dmadev"
00:01:39.741 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:39.741 Message: lib/power: Defining dependency "power"
00:01:39.741 Message: lib/reorder: Defining dependency "reorder"
00:01:39.741 Message: lib/security: Defining dependency "security"
00:01:39.741 Has header "linux/userfaultfd.h" : YES
00:01:39.741 Has header "linux/vduse.h" : YES
00:01:39.741 Message: lib/vhost: Defining dependency "vhost"
00:01:39.741 Compiler for C supports arguments -Wno-format-truncation: NO (cached)
00:01:39.741 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:39.741 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:39.741 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:39.741 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:39.741 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:39.741 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:39.741 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:39.741 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:39.741 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:39.741 Program doxygen found: YES (/usr/bin/doxygen)
00:01:39.741 Configuring doxy-api-html.conf using configuration
00:01:39.741 Configuring doxy-api-man.conf using configuration
00:01:39.741 Program mandb found: YES (/usr/bin/mandb)
00:01:39.741 Program sphinx-build found: NO
00:01:39.741 Configuring rte_build_config.h using configuration
00:01:39.741 Message:
00:01:39.741 =================
00:01:39.741 Applications Enabled
00:01:39.741 =================
00:01:39.741
00:01:39.741 apps:
00:01:39.741
00:01:39.741
00:01:39.741 Message:
00:01:39.741 =================
00:01:39.741 Libraries Enabled
00:01:39.741 =================
00:01:39.741
00:01:39.741 libs:
00:01:39.741 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:39.741 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:39.741 cryptodev, dmadev, power, reorder, security, vhost,
00:01:39.741
00:01:39.741 Message:
00:01:39.741 ===============
00:01:39.741 Drivers Enabled
00:01:39.741 ===============
00:01:39.741
00:01:39.741 common:
00:01:39.741
00:01:39.741 bus:
00:01:39.741 pci, vdev,
00:01:39.741 mempool:
00:01:39.741 ring,
00:01:39.741 dma:
00:01:39.741
00:01:39.741 net:
00:01:39.741
00:01:39.741 crypto:
00:01:39.741
00:01:39.741 compress:
00:01:39.741
00:01:39.741 vdpa:
00:01:39.741
00:01:39.741
00:01:39.741 Message:
00:01:39.741 =================
00:01:39.741 Content Skipped
00:01:39.741 =================
00:01:39.741
00:01:39.741 apps:
00:01:39.741 dumpcap: explicitly disabled via build config
00:01:39.741 graph: explicitly disabled via build config
00:01:39.741 pdump: explicitly disabled via build config
00:01:39.741 proc-info: explicitly disabled via build config
00:01:39.741 test-acl: explicitly disabled via build config
00:01:39.741 test-bbdev: explicitly disabled via build config
00:01:39.741 test-cmdline: explicitly disabled via build config
00:01:39.741 test-compress-perf: explicitly disabled via build config
00:01:39.741 test-crypto-perf: explicitly disabled via build config
00:01:39.741 test-dma-perf: explicitly disabled via build config
00:01:39.741 test-eventdev: explicitly disabled via build config
00:01:39.741 test-fib: explicitly disabled via build config
00:01:39.741 test-flow-perf: explicitly disabled via build config
00:01:39.741 test-gpudev: explicitly disabled via build config
00:01:39.741 test-mldev: explicitly disabled via build config
00:01:39.741 test-pipeline: explicitly disabled via build config
00:01:39.741 test-pmd: explicitly disabled via build config
00:01:39.741 test-regex: explicitly disabled via build config
00:01:39.741 test-sad: explicitly disabled via build config
00:01:39.741 test-security-perf: explicitly disabled via build config
00:01:39.741
00:01:39.741 libs:
00:01:39.741 argparse: explicitly disabled via build config
00:01:39.741 metrics: explicitly disabled via build config
00:01:39.741 acl: explicitly disabled via build config
00:01:39.741 bbdev: explicitly disabled via build config
00:01:39.741 bitratestats: explicitly disabled via build config
00:01:39.741 bpf: explicitly disabled via build config
00:01:39.741 cfgfile: explicitly disabled via build config
00:01:39.741 distributor: explicitly disabled via build config
00:01:39.741 efd: explicitly disabled via build config
00:01:39.741 eventdev: explicitly disabled via build config
00:01:39.741 dispatcher: explicitly disabled via build config
00:01:39.741 gpudev: explicitly disabled via build config
00:01:39.741 gro: explicitly disabled via build config
00:01:39.741 gso: explicitly disabled via build config
00:01:39.741 ip_frag: explicitly disabled via build config
00:01:39.741 jobstats: explicitly disabled via build config
00:01:39.741 latencystats: explicitly disabled via build config
00:01:39.741 lpm: explicitly disabled via build config
00:01:39.741 member: explicitly disabled via build config
00:01:39.741 pcapng: explicitly disabled via build config
00:01:39.741 rawdev: explicitly disabled via build config
00:01:39.741 regexdev: explicitly disabled via build config
00:01:39.741 mldev: explicitly disabled via build config
00:01:39.741 rib: explicitly disabled via build config
00:01:39.741 sched: explicitly disabled via build config
00:01:39.741 stack: explicitly disabled via build config
00:01:39.741 ipsec: explicitly disabled via build config
00:01:39.741 pdcp: explicitly disabled via build config
00:01:39.742 fib: explicitly disabled via build config
00:01:39.742 port: explicitly disabled via build config
00:01:39.742 pdump: explicitly disabled via build config
00:01:39.742 table: explicitly disabled via build config
00:01:39.742 pipeline: explicitly disabled via build config
00:01:39.742 graph: explicitly disabled via build config
00:01:39.742 node: explicitly disabled via build config
00:01:39.742
00:01:39.742 drivers:
00:01:39.742 common/cpt: not in enabled drivers build config
00:01:39.742 common/dpaax: not in enabled drivers build config
00:01:39.742 common/iavf: not in enabled drivers build config
00:01:39.742 common/idpf: not in enabled drivers build config
00:01:39.742 common/ionic: not in enabled drivers build config
00:01:39.742 common/mvep: not in enabled drivers build config
00:01:39.742 common/octeontx: not in enabled drivers build config
00:01:39.742 bus/auxiliary: not in enabled drivers build config
00:01:39.742 bus/cdx: not in enabled drivers build config
00:01:39.742 bus/dpaa: not in enabled drivers build config
00:01:39.742 bus/fslmc: not in enabled drivers build config
00:01:39.742 bus/ifpga: not in enabled drivers build config
00:01:39.742 bus/platform: not in enabled drivers build config
00:01:39.742 bus/uacce: not in enabled drivers build config
00:01:39.742 bus/vmbus: not in enabled drivers build config
00:01:39.742 common/cnxk: not in enabled drivers build config
00:01:39.742 common/mlx5: not in enabled drivers build config
00:01:39.742 common/nfp: not in enabled drivers build config
00:01:39.742 common/nitrox: not in enabled drivers build config
00:01:39.742 common/qat: not in enabled drivers build config
00:01:39.742 common/sfc_efx: not in enabled drivers build config
00:01:39.742 mempool/bucket: not in enabled drivers build config
00:01:39.742 mempool/cnxk: not in enabled drivers build config
00:01:39.742 mempool/dpaa: not in enabled drivers build config
00:01:39.742 mempool/dpaa2: not in enabled drivers build config
00:01:39.742 mempool/octeontx: not in enabled drivers build config
00:01:39.742 mempool/stack: not in enabled drivers build config
00:01:39.742 dma/cnxk: not in enabled drivers build config
00:01:39.742 dma/dpaa: not in enabled drivers build config
00:01:39.742 dma/dpaa2: not in enabled drivers build config
00:01:39.742 dma/hisilicon: not in enabled drivers build config
00:01:39.742 dma/idxd: not in enabled drivers build config
00:01:39.742 dma/ioat: not in enabled drivers build config
00:01:39.742 dma/skeleton: not in enabled drivers build config
00:01:39.742 net/af_packet: not in enabled drivers build config
00:01:39.742 net/af_xdp: not in enabled drivers build config
00:01:39.742 net/ark: not in enabled drivers build config
00:01:39.742 net/atlantic: not in enabled drivers build config
00:01:39.742 net/avp: not in enabled drivers build config
00:01:39.742 net/axgbe: not in enabled drivers build config
00:01:39.742 net/bnx2x: not in enabled drivers build config
00:01:39.742 net/bnxt: not in enabled drivers build config
00:01:39.742 net/bonding: not in enabled drivers build config
00:01:39.742 net/cnxk: not in enabled drivers build config
00:01:39.742 net/cpfl: not in enabled drivers build config
00:01:39.742 net/cxgbe: not in enabled drivers build config
00:01:39.742 net/dpaa: not in enabled drivers build config
00:01:39.742 net/dpaa2: not in enabled drivers build config
00:01:39.742 net/e1000: not in enabled drivers build config
00:01:39.742 net/ena: not in enabled drivers build config
00:01:39.742 net/enetc: not in enabled drivers build config
00:01:39.742 net/enetfec: not in enabled drivers build config
00:01:39.742 net/enic: not in enabled drivers build config
00:01:39.742 net/failsafe: not in enabled drivers build config
00:01:39.742 net/fm10k: not in enabled drivers build config
00:01:39.742 net/gve: not in enabled drivers build config
00:01:39.742 net/hinic: not in enabled drivers build config
00:01:39.742 net/hns3: not in enabled drivers build config
00:01:39.742 net/i40e: not in enabled drivers build config
00:01:39.742 net/iavf: not in enabled drivers build config
00:01:39.742 net/ice: not in enabled drivers build config
00:01:39.742 net/idpf: not in enabled drivers build config
00:01:39.742 net/igc: not in enabled drivers build config
00:01:39.742 net/ionic: not in enabled drivers build config
00:01:39.742 net/ipn3ke: not in enabled drivers build config
00:01:39.742 net/ixgbe: not in enabled drivers build config
00:01:39.742 net/mana: not in enabled drivers build config
00:01:39.742 net/memif: not in enabled drivers build config
00:01:39.742 net/mlx4: not in enabled drivers build config
00:01:39.742 net/mlx5: not in enabled drivers build config
00:01:39.742 net/mvneta: not in enabled drivers build config
00:01:39.742 net/mvpp2: not in enabled drivers build config
00:01:39.742 net/netvsc: not in enabled drivers build config
00:01:39.742 net/nfb: not in enabled drivers build config
00:01:39.742 net/nfp: not in enabled drivers build config
00:01:39.742 net/ngbe: not in enabled drivers build config
00:01:39.742 net/null: not in enabled drivers build config
00:01:39.742 net/octeontx: not in enabled drivers build config
00:01:39.742 net/octeon_ep: not in enabled drivers build config
00:01:39.742 net/pcap: not in enabled drivers build config
00:01:39.742 net/pfe: not in enabled drivers build config
00:01:39.742 net/qede: not in enabled drivers build config
00:01:39.742 net/ring: not in enabled drivers build config
00:01:39.742 net/sfc: not in enabled drivers build config
00:01:39.742 net/softnic: not in enabled drivers build config
00:01:39.742 net/tap: not in enabled drivers build config
00:01:39.742 net/thunderx: not in enabled drivers build config
00:01:39.742 net/txgbe: not in enabled drivers build config
00:01:39.742 net/vdev_netvsc: not in enabled drivers build config
00:01:39.742 net/vhost: not in enabled drivers build config
00:01:39.742 net/virtio: not in enabled drivers build config
00:01:39.742 net/vmxnet3: not in enabled drivers build config
00:01:39.742 raw/*: missing internal dependency, "rawdev"
00:01:39.742 crypto/armv8: not in enabled drivers build config
00:01:39.742 crypto/bcmfs: not in enabled drivers build config
00:01:39.742 crypto/caam_jr: not in enabled drivers build config
00:01:39.742 crypto/ccp: not in enabled drivers build config
00:01:39.742 crypto/cnxk: not in enabled drivers build config
00:01:39.742 crypto/dpaa_sec: not in enabled drivers build config
00:01:39.742 crypto/dpaa2_sec: not in enabled drivers build config
00:01:39.742 crypto/ipsec_mb: not in enabled drivers build config
00:01:39.742 crypto/mlx5: not in enabled drivers build config
00:01:39.742 crypto/mvsam: not in enabled drivers build config
00:01:39.742 crypto/nitrox: not in enabled drivers build config
00:01:39.742 crypto/null: not in enabled drivers build config
00:01:39.742 crypto/octeontx: not in enabled drivers build config
00:01:39.742 crypto/openssl: not in enabled drivers build config
00:01:39.742 crypto/scheduler: not in enabled drivers build config
00:01:39.742 crypto/uadk: not in enabled drivers build config
00:01:39.742 crypto/virtio: not in enabled drivers build config
00:01:39.742 compress/isal: not in enabled drivers build config
00:01:39.742 compress/mlx5: not in enabled drivers build config
00:01:39.742 compress/nitrox: not in enabled drivers build config
00:01:39.742 compress/octeontx: not in enabled drivers build config
00:01:39.742 compress/zlib: not in enabled drivers build config
00:01:39.742 regex/*: missing internal dependency, "regexdev"
00:01:39.742 ml/*: missing internal dependency, "mldev"
00:01:39.742 vdpa/ifc: not in enabled drivers build config
00:01:39.742 vdpa/mlx5: not in enabled drivers build config
00:01:39.742 vdpa/nfp: not in enabled drivers build config
00:01:39.742 vdpa/sfc: not in enabled drivers build config
00:01:39.742 event/*: missing internal dependency, "eventdev"
00:01:39.742 baseband/*: missing internal dependency, "bbdev"
00:01:39.742 gpu/*: missing internal dependency, "gpudev"
00:01:39.742
00:01:39.742
00:01:39.742 Build targets in project: 84
00:01:39.742
00:01:39.742 DPDK 24.03.0
00:01:39.742
00:01:39.742 User defined options
00:01:39.742 buildtype : debug
00:01:39.742 default_library : static
00:01:39.742 libdir : lib
00:01:39.742 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:01:39.742 c_args : -fPIC -Werror
00:01:39.742 c_link_args :
00:01:39.742 cpu_instruction_set: native
00:01:39.742 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:01:39.742 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:01:39.742 enable_docs : false
00:01:39.742 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:39.742 enable_kmods : false
00:01:39.742 max_lcores : 128
00:01:39.742 tests : false
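The "User defined options" block above is the meson configuration SPDK passes when it builds its bundled DPDK: a debug static build installed under dpdk/build, with all test apps and a long list of libraries compiled out, and only the pci/vdev buses plus the ring mempool driver enabled. Reconstructed as a command line it corresponds roughly to the following (disable_apps/disable_libs abbreviated to their first few entries; the full comma-separated values are printed above):

  #!/usr/bin/env bash
  cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk
  meson setup build-tmp \
    --buildtype=debug --default-library=static --libdir=lib \
    --prefix=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build \
    -Dc_args='-fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Ddisable_apps=test-sad,test-acl,test-dma-perf \
    -Ddisable_libs=port,sched,rib \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
  ninja -C build-tmp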
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:40.003 [35/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:40.003 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:40.003 [37/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:40.003 [38/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:40.261 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:40.261 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.261 [41/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.261 [42/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:40.261 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:40.261 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:40.261 [45/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:40.261 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:40.261 [47/267] Linking static target lib/librte_ring.a 00:01:40.261 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:40.261 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:40.261 [50/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:40.261 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:40.261 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:40.261 [53/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:40.261 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:40.518 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:40.518 [56/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:40.518 [57/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:40.518 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:40.518 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:40.518 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:40.518 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:40.518 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:40.518 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:40.518 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:40.518 [65/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:40.518 [66/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:40.518 [67/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:40.518 [68/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:40.518 [69/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:40.518 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:40.518 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:40.518 [72/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:40.518 [73/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:40.518 [74/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:40.518 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:40.518 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:40.518 [77/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:40.518 [78/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:40.518 [79/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:40.518 [80/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:40.518 [81/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:40.518 [82/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:40.518 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:40.518 [84/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:40.518 [85/267] Linking static target lib/librte_timer.a 00:01:40.518 [86/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:40.518 [87/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:40.518 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:40.518 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:40.518 [90/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:40.518 [91/267] Linking static target lib/librte_meter.a 00:01:40.518 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:40.518 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:40.519 [94/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:40.519 [95/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:40.519 [96/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:40.519 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:40.519 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:40.519 [99/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:40.519 [100/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:40.519 [101/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:40.519 [102/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:40.519 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:40.519 [104/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:40.519 [105/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:40.519 [106/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:40.519 [107/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:40.519 [108/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:40.519 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:40.519 [110/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.519 [111/267] Linking static target lib/librte_telemetry.a 00:01:40.519 [112/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:40.519 [113/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:40.519 [114/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:40.519 [115/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:40.519 [116/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:40.519 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:40.519 [118/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:40.519 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:40.519 [120/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:40.519 [121/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:40.519 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:40.519 [123/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:40.519 [124/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:40.519 [125/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:40.519 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:40.519 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:40.519 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:40.519 [129/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:40.519 [130/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:40.519 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:40.519 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:40.519 [133/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:40.519 [134/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:40.519 [135/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:40.519 [136/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:40.519 [137/267] Linking static target lib/librte_rcu.a 00:01:40.519 [138/267] Linking target lib/librte_log.so.24.1 00:01:40.519 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:40.519 [140/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:40.519 [141/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:40.519 [142/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:40.519 [143/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:40.519 [144/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:40.519 [145/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:40.519 [146/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:40.519 [147/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:40.519 [148/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:40.519 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:40.519 [150/267] Linking static target lib/librte_mempool.a 00:01:40.519 [151/267] Linking static target lib/librte_dmadev.a 00:01:40.519 [152/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:40.519 [153/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:40.519 [154/267] Linking static target 
drivers/libtmp_rte_bus_pci.a 00:01:40.519 [155/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:40.779 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:40.779 [157/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:40.779 [158/267] Linking static target lib/librte_cmdline.a 00:01:40.779 [159/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:40.779 [160/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:40.779 [161/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:40.779 [162/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:40.779 [163/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:40.779 [164/267] Linking static target lib/librte_compressdev.a 00:01:40.779 [165/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:40.779 [166/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:40.779 [167/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:40.779 [168/267] Linking static target lib/librte_mbuf.a 00:01:40.779 [169/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:40.779 [170/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:40.779 [171/267] Linking static target lib/librte_security.a 00:01:40.779 [172/267] Linking static target lib/librte_reorder.a 00:01:40.779 [173/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.779 [174/267] Linking static target lib/librte_net.a 00:01:40.779 [175/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:40.779 [176/267] Linking static target lib/librte_eal.a 00:01:40.779 [177/267] Linking static target lib/librte_power.a 00:01:40.779 [178/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:40.779 [179/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:40.779 [180/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:40.779 [181/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:40.779 [182/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:40.779 [183/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:40.779 [184/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:40.779 [185/267] Linking static target lib/librte_cryptodev.a 00:01:40.779 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:40.779 [187/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:40.779 [188/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:40.779 [189/267] Linking target lib/librte_kvargs.so.24.1 00:01:40.779 [190/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:40.779 [191/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:40.779 [192/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.779 [193/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.779 [194/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.779 [195/267] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:40.779 [196/267] Linking static target drivers/librte_mempool_ring.a 00:01:40.779 [197/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:40.779 [198/267] Linking static target lib/librte_hash.a 00:01:40.779 [199/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:40.779 [200/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:40.779 [201/267] Linking static target drivers/librte_bus_vdev.a 00:01:40.779 [202/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:40.779 [203/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:40.779 [204/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:40.779 [205/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:40.779 [206/267] Linking static target drivers/librte_bus_pci.a 00:01:41.039 [207/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.039 [208/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.039 [209/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:41.039 [210/267] Linking static target lib/librte_ethdev.a 00:01:41.039 [211/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:41.039 [212/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.039 [213/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.300 [214/267] Linking target lib/librte_telemetry.so.24.1 00:01:41.300 [215/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.300 [216/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.301 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:41.301 [218/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:41.301 [219/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.301 [220/267] Linking static target lib/librte_vhost.a 00:01:41.301 [221/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.560 [222/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.560 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.560 [224/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.820 [225/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.820 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.820 [227/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.082 [228/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.026 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.597 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.753 [231/267] Generating lib/ethdev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:52.325 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.325 [233/267] Linking target lib/librte_eal.so.24.1 00:01:52.325 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:52.586 [235/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:52.586 [236/267] Linking target lib/librte_meter.so.24.1 00:01:52.586 [237/267] Linking target lib/librte_timer.so.24.1 00:01:52.586 [238/267] Linking target lib/librte_ring.so.24.1 00:01:52.586 [239/267] Linking target lib/librte_pci.so.24.1 00:01:52.586 [240/267] Linking target lib/librte_dmadev.so.24.1 00:01:52.586 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:52.586 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:52.586 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:52.586 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:52.586 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:52.586 [246/267] Linking target lib/librte_rcu.so.24.1 00:01:52.586 [247/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:52.586 [248/267] Linking target lib/librte_mempool.so.24.1 00:01:52.848 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:52.848 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:52.848 [251/267] Linking target lib/librte_mbuf.so.24.1 00:01:52.848 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:53.109 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:53.109 [254/267] Linking target lib/librte_net.so.24.1 00:01:53.109 [255/267] Linking target lib/librte_compressdev.so.24.1 00:01:53.109 [256/267] Linking target lib/librte_cryptodev.so.24.1 00:01:53.109 [257/267] Linking target lib/librte_reorder.so.24.1 00:01:53.109 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:53.109 [259/267] Linking target lib/librte_cmdline.so.24.1 00:01:53.109 [260/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:53.371 [261/267] Linking target lib/librte_ethdev.so.24.1 00:01:53.371 [262/267] Linking target lib/librte_hash.so.24.1 00:01:53.371 [263/267] Linking target lib/librte_security.so.24.1 00:01:53.371 [264/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:53.371 [265/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:53.371 [266/267] Linking target lib/librte_power.so.24.1 00:01:53.371 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:53.371 INFO: autodetecting backend as ninja 00:01:53.371 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:54.321 CC lib/ut/ut.o 00:01:54.321 CC lib/log/log.o 00:01:54.321 CC lib/log/log_flags.o 00:01:54.321 CC lib/log/log_deprecated.o 00:01:54.321 CC lib/ut_mock/mock.o 00:01:54.582 LIB libspdk_ut.a 00:01:54.582 LIB libspdk_log.a 00:01:54.582 LIB libspdk_ut_mock.a 00:01:54.842 CC lib/util/base64.o 00:01:54.842 CC lib/util/bit_array.o 00:01:54.842 CC lib/dma/dma.o 00:01:54.842 CC lib/util/cpuset.o 00:01:54.842 CC lib/ioat/ioat.o 00:01:54.842 CC lib/util/crc16.o 
00:01:54.842 CXX lib/trace_parser/trace.o 00:01:54.842 CC lib/util/crc32.o 00:01:54.842 CC lib/util/crc32_ieee.o 00:01:54.842 CC lib/util/crc32c.o 00:01:54.842 CC lib/util/crc64.o 00:01:54.842 CC lib/util/dif.o 00:01:54.842 CC lib/util/fd.o 00:01:54.842 CC lib/util/file.o 00:01:54.842 CC lib/util/hexlify.o 00:01:54.842 CC lib/util/iov.o 00:01:54.842 CC lib/util/math.o 00:01:54.842 CC lib/util/pipe.o 00:01:54.842 CC lib/util/strerror_tls.o 00:01:54.842 CC lib/util/string.o 00:01:54.842 CC lib/util/uuid.o 00:01:54.842 CC lib/util/fd_group.o 00:01:54.842 CC lib/util/xor.o 00:01:54.842 CC lib/util/zipf.o 00:01:55.103 CC lib/vfio_user/host/vfio_user_pci.o 00:01:55.103 CC lib/vfio_user/host/vfio_user.o 00:01:55.103 LIB libspdk_ioat.a 00:01:55.103 LIB libspdk_dma.a 00:01:55.365 LIB libspdk_vfio_user.a 00:01:55.365 LIB libspdk_util.a 00:01:55.626 LIB libspdk_trace_parser.a 00:01:55.626 CC lib/conf/conf.o 00:01:55.626 CC lib/env_dpdk/env.o 00:01:55.626 CC lib/env_dpdk/memory.o 00:01:55.626 CC lib/rdma_utils/rdma_utils.o 00:01:55.626 CC lib/rdma_provider/common.o 00:01:55.626 CC lib/env_dpdk/pci.o 00:01:55.626 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:55.626 CC lib/env_dpdk/init.o 00:01:55.626 CC lib/json/json_parse.o 00:01:55.626 CC lib/idxd/idxd.o 00:01:55.626 CC lib/env_dpdk/threads.o 00:01:55.626 CC lib/idxd/idxd_user.o 00:01:55.626 CC lib/json/json_util.o 00:01:55.626 CC lib/json/json_write.o 00:01:55.626 CC lib/vmd/vmd.o 00:01:55.626 CC lib/env_dpdk/pci_ioat.o 00:01:55.626 CC lib/idxd/idxd_kernel.o 00:01:55.626 CC lib/vmd/led.o 00:01:55.626 CC lib/env_dpdk/pci_virtio.o 00:01:55.626 CC lib/env_dpdk/pci_vmd.o 00:01:55.626 CC lib/env_dpdk/pci_idxd.o 00:01:55.626 CC lib/env_dpdk/pci_event.o 00:01:55.626 CC lib/env_dpdk/sigbus_handler.o 00:01:55.626 CC lib/env_dpdk/pci_dpdk.o 00:01:55.626 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:55.626 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:55.888 LIB libspdk_rdma_provider.a 00:01:55.888 LIB libspdk_rdma_utils.a 00:01:55.888 LIB libspdk_conf.a 00:01:55.888 LIB libspdk_json.a 00:01:56.150 LIB libspdk_idxd.a 00:01:56.150 LIB libspdk_vmd.a 00:01:56.150 CC lib/jsonrpc/jsonrpc_server.o 00:01:56.150 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:56.150 CC lib/jsonrpc/jsonrpc_client.o 00:01:56.150 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:56.412 LIB libspdk_jsonrpc.a 00:01:56.673 CC lib/rpc/rpc.o 00:01:56.933 LIB libspdk_env_dpdk.a 00:01:56.933 LIB libspdk_rpc.a 00:01:57.195 CC lib/keyring/keyring.o 00:01:57.195 CC lib/keyring/keyring_rpc.o 00:01:57.195 CC lib/trace/trace_flags.o 00:01:57.195 CC lib/trace/trace.o 00:01:57.195 CC lib/notify/notify.o 00:01:57.195 CC lib/trace/trace_rpc.o 00:01:57.195 CC lib/notify/notify_rpc.o 00:01:57.457 LIB libspdk_notify.a 00:01:57.457 LIB libspdk_keyring.a 00:01:57.457 LIB libspdk_trace.a 00:01:57.719 CC lib/sock/sock.o 00:01:57.719 CC lib/sock/sock_rpc.o 00:01:57.719 CC lib/thread/thread.o 00:01:57.719 CC lib/thread/iobuf.o 00:01:57.981 LIB libspdk_sock.a 00:01:58.554 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:58.554 CC lib/nvme/nvme_ctrlr.o 00:01:58.554 CC lib/nvme/nvme_fabric.o 00:01:58.554 CC lib/nvme/nvme_ns_cmd.o 00:01:58.554 CC lib/nvme/nvme_ns.o 00:01:58.554 CC lib/nvme/nvme_pcie_common.o 00:01:58.554 CC lib/nvme/nvme_pcie.o 00:01:58.554 CC lib/nvme/nvme_qpair.o 00:01:58.554 CC lib/nvme/nvme.o 00:01:58.554 CC lib/nvme/nvme_quirks.o 00:01:58.554 CC lib/nvme/nvme_transport.o 00:01:58.554 CC lib/nvme/nvme_discovery.o 00:01:58.554 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:58.554 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:58.554 CC 
lib/nvme/nvme_tcp.o 00:01:58.554 CC lib/nvme/nvme_opal.o 00:01:58.554 CC lib/nvme/nvme_io_msg.o 00:01:58.554 CC lib/nvme/nvme_poll_group.o 00:01:58.554 CC lib/nvme/nvme_zns.o 00:01:58.554 CC lib/nvme/nvme_stubs.o 00:01:58.554 CC lib/nvme/nvme_auth.o 00:01:58.554 CC lib/nvme/nvme_cuse.o 00:01:58.554 CC lib/nvme/nvme_vfio_user.o 00:01:58.554 CC lib/nvme/nvme_rdma.o 00:01:58.814 LIB libspdk_thread.a 00:01:59.074 CC lib/vfu_tgt/tgt_endpoint.o 00:01:59.074 CC lib/vfu_tgt/tgt_rpc.o 00:01:59.074 CC lib/init/json_config.o 00:01:59.074 CC lib/init/subsystem_rpc.o 00:01:59.074 CC lib/init/subsystem.o 00:01:59.074 CC lib/accel/accel.o 00:01:59.074 CC lib/accel/accel_rpc.o 00:01:59.074 CC lib/init/rpc.o 00:01:59.074 CC lib/accel/accel_sw.o 00:01:59.074 CC lib/blob/blobstore.o 00:01:59.074 CC lib/blob/request.o 00:01:59.074 CC lib/blob/zeroes.o 00:01:59.074 CC lib/blob/blob_bs_dev.o 00:01:59.074 CC lib/virtio/virtio.o 00:01:59.074 CC lib/virtio/virtio_vhost_user.o 00:01:59.074 CC lib/virtio/virtio_vfio_user.o 00:01:59.074 CC lib/virtio/virtio_pci.o 00:01:59.335 LIB libspdk_init.a 00:01:59.335 LIB libspdk_vfu_tgt.a 00:01:59.335 LIB libspdk_virtio.a 00:01:59.595 CC lib/event/app.o 00:01:59.595 CC lib/event/reactor.o 00:01:59.595 CC lib/event/log_rpc.o 00:01:59.595 CC lib/event/app_rpc.o 00:01:59.595 CC lib/event/scheduler_static.o 00:01:59.856 LIB libspdk_accel.a 00:01:59.856 LIB libspdk_event.a 00:01:59.856 LIB libspdk_nvme.a 00:02:00.117 CC lib/bdev/bdev.o 00:02:00.117 CC lib/bdev/bdev_rpc.o 00:02:00.117 CC lib/bdev/part.o 00:02:00.117 CC lib/bdev/bdev_zone.o 00:02:00.117 CC lib/bdev/scsi_nvme.o 00:02:01.058 LIB libspdk_blob.a 00:02:01.319 CC lib/blobfs/blobfs.o 00:02:01.319 CC lib/lvol/lvol.o 00:02:01.319 CC lib/blobfs/tree.o 00:02:01.579 LIB libspdk_bdev.a 00:02:01.838 CC lib/scsi/dev.o 00:02:01.838 CC lib/scsi/lun.o 00:02:01.838 CC lib/scsi/port.o 00:02:01.838 CC lib/scsi/scsi.o 00:02:01.838 CC lib/scsi/scsi_bdev.o 00:02:01.838 CC lib/scsi/scsi_pr.o 00:02:01.838 CC lib/scsi/scsi_rpc.o 00:02:01.838 CC lib/scsi/task.o 00:02:01.838 CC lib/nbd/nbd.o 00:02:01.838 CC lib/ublk/ublk.o 00:02:01.838 CC lib/nbd/nbd_rpc.o 00:02:01.838 CC lib/ublk/ublk_rpc.o 00:02:01.838 CC lib/ftl/ftl_core.o 00:02:01.838 CC lib/ftl/ftl_layout.o 00:02:01.838 CC lib/nvmf/ctrlr.o 00:02:01.838 CC lib/ftl/ftl_init.o 00:02:01.838 CC lib/nvmf/ctrlr_discovery.o 00:02:01.838 CC lib/nvmf/ctrlr_bdev.o 00:02:01.838 CC lib/ftl/ftl_debug.o 00:02:01.838 CC lib/ftl/ftl_io.o 00:02:01.838 CC lib/nvmf/subsystem.o 00:02:01.838 CC lib/ftl/ftl_sb.o 00:02:01.838 CC lib/nvmf/nvmf.o 00:02:01.838 CC lib/ftl/ftl_l2p.o 00:02:01.838 CC lib/nvmf/nvmf_rpc.o 00:02:01.838 CC lib/ftl/ftl_l2p_flat.o 00:02:01.838 CC lib/nvmf/transport.o 00:02:01.838 CC lib/ftl/ftl_nv_cache.o 00:02:01.838 CC lib/ftl/ftl_band.o 00:02:01.838 CC lib/nvmf/tcp.o 00:02:01.838 CC lib/ftl/ftl_band_ops.o 00:02:01.838 CC lib/nvmf/stubs.o 00:02:01.838 CC lib/nvmf/mdns_server.o 00:02:01.838 CC lib/ftl/ftl_writer.o 00:02:01.838 CC lib/nvmf/vfio_user.o 00:02:01.838 CC lib/ftl/ftl_rq.o 00:02:01.838 CC lib/nvmf/rdma.o 00:02:01.838 CC lib/ftl/ftl_reloc.o 00:02:01.838 CC lib/ftl/ftl_l2p_cache.o 00:02:01.838 CC lib/nvmf/auth.o 00:02:01.838 CC lib/ftl/ftl_p2l.o 00:02:01.838 CC lib/ftl/mngt/ftl_mngt.o 00:02:01.838 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:01.838 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:01.838 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:01.838 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:01.838 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:01.838 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:01.838 CC 
lib/ftl/mngt/ftl_mngt_l2p.o 00:02:01.838 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:01.838 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:01.838 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:01.838 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:01.838 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:01.838 CC lib/ftl/utils/ftl_conf.o 00:02:01.838 CC lib/ftl/utils/ftl_md.o 00:02:01.838 CC lib/ftl/utils/ftl_bitmap.o 00:02:01.838 CC lib/ftl/utils/ftl_mempool.o 00:02:01.838 CC lib/ftl/utils/ftl_property.o 00:02:01.838 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:01.838 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:01.838 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:01.838 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:01.838 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:01.838 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:01.838 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:01.838 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:01.838 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:01.838 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:01.838 CC lib/ftl/base/ftl_base_bdev.o 00:02:01.838 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:01.838 CC lib/ftl/base/ftl_base_dev.o 00:02:01.838 CC lib/ftl/ftl_trace.o 00:02:02.098 LIB libspdk_lvol.a 00:02:02.098 LIB libspdk_blobfs.a 00:02:02.098 LIB libspdk_nbd.a 00:02:02.358 LIB libspdk_scsi.a 00:02:02.358 LIB libspdk_ublk.a 00:02:02.618 CC lib/iscsi/conn.o 00:02:02.618 CC lib/iscsi/iscsi.o 00:02:02.618 CC lib/iscsi/init_grp.o 00:02:02.618 CC lib/iscsi/md5.o 00:02:02.618 CC lib/iscsi/param.o 00:02:02.618 CC lib/iscsi/portal_grp.o 00:02:02.618 CC lib/iscsi/tgt_node.o 00:02:02.618 CC lib/iscsi/iscsi_subsystem.o 00:02:02.618 CC lib/iscsi/iscsi_rpc.o 00:02:02.618 CC lib/iscsi/task.o 00:02:02.618 CC lib/vhost/vhost.o 00:02:02.618 CC lib/vhost/vhost_rpc.o 00:02:02.618 CC lib/vhost/vhost_scsi.o 00:02:02.618 CC lib/vhost/vhost_blk.o 00:02:02.618 CC lib/vhost/rte_vhost_user.o 00:02:02.618 LIB libspdk_ftl.a 00:02:03.190 LIB libspdk_iscsi.a 00:02:03.190 LIB libspdk_nvmf.a 00:02:03.190 LIB libspdk_vhost.a 00:02:03.763 CC module/vfu_device/vfu_virtio.o 00:02:03.763 CC module/vfu_device/vfu_virtio_blk.o 00:02:03.763 CC module/env_dpdk/env_dpdk_rpc.o 00:02:03.763 CC module/vfu_device/vfu_virtio_scsi.o 00:02:03.763 CC module/vfu_device/vfu_virtio_rpc.o 00:02:04.024 CC module/scheduler/gscheduler/gscheduler.o 00:02:04.024 CC module/accel/dsa/accel_dsa_rpc.o 00:02:04.024 CC module/accel/dsa/accel_dsa.o 00:02:04.024 CC module/accel/iaa/accel_iaa.o 00:02:04.024 CC module/accel/iaa/accel_iaa_rpc.o 00:02:04.024 LIB libspdk_env_dpdk_rpc.a 00:02:04.024 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:04.024 CC module/keyring/linux/keyring.o 00:02:04.024 CC module/sock/posix/posix.o 00:02:04.024 CC module/accel/error/accel_error.o 00:02:04.024 CC module/accel/ioat/accel_ioat.o 00:02:04.024 CC module/keyring/file/keyring.o 00:02:04.024 CC module/keyring/linux/keyring_rpc.o 00:02:04.024 CC module/keyring/file/keyring_rpc.o 00:02:04.024 CC module/accel/ioat/accel_ioat_rpc.o 00:02:04.024 CC module/accel/error/accel_error_rpc.o 00:02:04.024 CC module/blob/bdev/blob_bdev.o 00:02:04.024 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:04.024 LIB libspdk_scheduler_gscheduler.a 00:02:04.024 LIB libspdk_keyring_file.a 00:02:04.024 LIB libspdk_keyring_linux.a 00:02:04.024 LIB libspdk_scheduler_dpdk_governor.a 00:02:04.024 LIB libspdk_accel_error.a 00:02:04.024 LIB libspdk_accel_ioat.a 00:02:04.024 LIB libspdk_accel_iaa.a 00:02:04.024 LIB libspdk_scheduler_dynamic.a 00:02:04.024 LIB libspdk_accel_dsa.a 00:02:04.024 LIB libspdk_blob_bdev.a 00:02:04.286 LIB 
libspdk_vfu_device.a 00:02:04.546 LIB libspdk_sock_posix.a 00:02:04.546 CC module/blobfs/bdev/blobfs_bdev.o 00:02:04.546 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:04.546 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:04.546 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:04.546 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:04.546 CC module/bdev/malloc/bdev_malloc.o 00:02:04.546 CC module/bdev/gpt/gpt.o 00:02:04.546 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:04.546 CC module/bdev/error/vbdev_error_rpc.o 00:02:04.546 CC module/bdev/gpt/vbdev_gpt.o 00:02:04.547 CC module/bdev/error/vbdev_error.o 00:02:04.547 CC module/bdev/delay/vbdev_delay.o 00:02:04.547 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:04.547 CC module/bdev/aio/bdev_aio.o 00:02:04.547 CC module/bdev/null/bdev_null.o 00:02:04.547 CC module/bdev/null/bdev_null_rpc.o 00:02:04.547 CC module/bdev/aio/bdev_aio_rpc.o 00:02:04.547 CC module/bdev/raid/bdev_raid.o 00:02:04.547 CC module/bdev/lvol/vbdev_lvol.o 00:02:04.547 CC module/bdev/nvme/bdev_nvme.o 00:02:04.547 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:04.547 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:04.547 CC module/bdev/raid/bdev_raid_rpc.o 00:02:04.547 CC module/bdev/split/vbdev_split.o 00:02:04.547 CC module/bdev/nvme/nvme_rpc.o 00:02:04.547 CC module/bdev/split/vbdev_split_rpc.o 00:02:04.547 CC module/bdev/raid/bdev_raid_sb.o 00:02:04.547 CC module/bdev/nvme/bdev_mdns_client.o 00:02:04.547 CC module/bdev/raid/raid0.o 00:02:04.547 CC module/bdev/nvme/vbdev_opal.o 00:02:04.547 CC module/bdev/raid/raid1.o 00:02:04.547 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:04.547 CC module/bdev/raid/concat.o 00:02:04.547 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:04.547 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:04.547 CC module/bdev/ftl/bdev_ftl.o 00:02:04.547 CC module/bdev/passthru/vbdev_passthru.o 00:02:04.547 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:04.547 CC module/bdev/iscsi/bdev_iscsi.o 00:02:04.547 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:04.547 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:04.547 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:04.805 LIB libspdk_blobfs_bdev.a 00:02:04.805 LIB libspdk_bdev_split.a 00:02:04.805 LIB libspdk_bdev_gpt.a 00:02:04.805 LIB libspdk_bdev_error.a 00:02:04.805 LIB libspdk_bdev_delay.a 00:02:04.805 LIB libspdk_bdev_null.a 00:02:04.805 LIB libspdk_bdev_ftl.a 00:02:04.805 LIB libspdk_bdev_aio.a 00:02:04.805 LIB libspdk_bdev_passthru.a 00:02:04.805 LIB libspdk_bdev_malloc.a 00:02:04.805 LIB libspdk_bdev_zone_block.a 00:02:04.805 LIB libspdk_bdev_iscsi.a 00:02:05.065 LIB libspdk_bdev_virtio.a 00:02:05.065 LIB libspdk_bdev_lvol.a 00:02:05.325 LIB libspdk_bdev_raid.a 00:02:06.325 LIB libspdk_bdev_nvme.a 00:02:06.899 CC module/event/subsystems/keyring/keyring.o 00:02:06.899 CC module/event/subsystems/vmd/vmd.o 00:02:06.899 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:06.899 CC module/event/subsystems/sock/sock.o 00:02:06.899 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:06.899 CC module/event/subsystems/scheduler/scheduler.o 00:02:06.899 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:06.899 CC module/event/subsystems/iobuf/iobuf.o 00:02:06.899 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:06.899 LIB libspdk_event_vfu_tgt.a 00:02:06.899 LIB libspdk_event_keyring.a 00:02:06.899 LIB libspdk_event_sock.a 00:02:06.899 LIB libspdk_event_scheduler.a 00:02:06.899 LIB libspdk_event_vhost_blk.a 00:02:06.899 LIB libspdk_event_vmd.a 00:02:07.160 LIB libspdk_event_iobuf.a 00:02:07.422 CC 
module/event/subsystems/accel/accel.o 00:02:07.422 LIB libspdk_event_accel.a 00:02:07.684 CC module/event/subsystems/bdev/bdev.o 00:02:07.946 LIB libspdk_event_bdev.a 00:02:08.208 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:08.208 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:08.208 CC module/event/subsystems/nbd/nbd.o 00:02:08.208 CC module/event/subsystems/ublk/ublk.o 00:02:08.208 CC module/event/subsystems/scsi/scsi.o 00:02:08.469 LIB libspdk_event_nbd.a 00:02:08.469 LIB libspdk_event_ublk.a 00:02:08.469 LIB libspdk_event_scsi.a 00:02:08.469 LIB libspdk_event_nvmf.a 00:02:08.730 CC module/event/subsystems/iscsi/iscsi.o 00:02:08.730 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:08.730 LIB libspdk_event_vhost_scsi.a 00:02:08.730 LIB libspdk_event_iscsi.a 00:02:09.299 CXX app/trace/trace.o 00:02:09.299 CC app/trace_record/trace_record.o 00:02:09.299 CC app/spdk_top/spdk_top.o 00:02:09.299 CC app/spdk_nvme_perf/perf.o 00:02:09.299 CC app/spdk_nvme_identify/identify.o 00:02:09.299 CC app/spdk_nvme_discover/discovery_aer.o 00:02:09.299 CC app/spdk_lspci/spdk_lspci.o 00:02:09.299 TEST_HEADER include/spdk/accel.h 00:02:09.299 TEST_HEADER include/spdk/accel_module.h 00:02:09.299 TEST_HEADER include/spdk/assert.h 00:02:09.299 TEST_HEADER include/spdk/barrier.h 00:02:09.299 TEST_HEADER include/spdk/base64.h 00:02:09.300 TEST_HEADER include/spdk/bdev.h 00:02:09.300 TEST_HEADER include/spdk/bdev_zone.h 00:02:09.300 TEST_HEADER include/spdk/bdev_module.h 00:02:09.300 CC test/rpc_client/rpc_client_test.o 00:02:09.300 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:09.300 TEST_HEADER include/spdk/bit_array.h 00:02:09.300 TEST_HEADER include/spdk/bit_pool.h 00:02:09.300 TEST_HEADER include/spdk/blob_bdev.h 00:02:09.300 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:09.300 TEST_HEADER include/spdk/blob.h 00:02:09.300 TEST_HEADER include/spdk/blobfs.h 00:02:09.300 TEST_HEADER include/spdk/conf.h 00:02:09.300 TEST_HEADER include/spdk/crc16.h 00:02:09.300 TEST_HEADER include/spdk/config.h 00:02:09.300 TEST_HEADER include/spdk/cpuset.h 00:02:09.300 TEST_HEADER include/spdk/crc32.h 00:02:09.300 TEST_HEADER include/spdk/dma.h 00:02:09.300 TEST_HEADER include/spdk/crc64.h 00:02:09.300 TEST_HEADER include/spdk/dif.h 00:02:09.300 TEST_HEADER include/spdk/endian.h 00:02:09.300 TEST_HEADER include/spdk/env_dpdk.h 00:02:09.300 TEST_HEADER include/spdk/env.h 00:02:09.300 TEST_HEADER include/spdk/fd_group.h 00:02:09.300 TEST_HEADER include/spdk/event.h 00:02:09.300 TEST_HEADER include/spdk/file.h 00:02:09.300 TEST_HEADER include/spdk/ftl.h 00:02:09.300 TEST_HEADER include/spdk/fd.h 00:02:09.300 CC app/spdk_dd/spdk_dd.o 00:02:09.300 TEST_HEADER include/spdk/gpt_spec.h 00:02:09.300 TEST_HEADER include/spdk/hexlify.h 00:02:09.300 TEST_HEADER include/spdk/idxd.h 00:02:09.300 TEST_HEADER include/spdk/histogram_data.h 00:02:09.300 TEST_HEADER include/spdk/idxd_spec.h 00:02:09.300 CC app/nvmf_tgt/nvmf_main.o 00:02:09.300 TEST_HEADER include/spdk/ioat.h 00:02:09.300 TEST_HEADER include/spdk/init.h 00:02:09.300 TEST_HEADER include/spdk/ioat_spec.h 00:02:09.300 TEST_HEADER include/spdk/json.h 00:02:09.300 CC app/iscsi_tgt/iscsi_tgt.o 00:02:09.300 TEST_HEADER include/spdk/iscsi_spec.h 00:02:09.300 TEST_HEADER include/spdk/jsonrpc.h 00:02:09.300 TEST_HEADER include/spdk/keyring_module.h 00:02:09.300 TEST_HEADER include/spdk/keyring.h 00:02:09.300 TEST_HEADER include/spdk/likely.h 00:02:09.300 TEST_HEADER include/spdk/log.h 00:02:09.300 TEST_HEADER include/spdk/lvol.h 00:02:09.300 TEST_HEADER include/spdk/mmio.h 
00:02:09.300 TEST_HEADER include/spdk/memory.h 00:02:09.300 TEST_HEADER include/spdk/nbd.h 00:02:09.300 TEST_HEADER include/spdk/notify.h 00:02:09.300 TEST_HEADER include/spdk/nvme_intel.h 00:02:09.300 TEST_HEADER include/spdk/nvme.h 00:02:09.300 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:09.300 TEST_HEADER include/spdk/nvme_spec.h 00:02:09.300 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:09.300 TEST_HEADER include/spdk/nvme_zns.h 00:02:09.300 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:09.300 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:09.300 TEST_HEADER include/spdk/nvmf.h 00:02:09.300 TEST_HEADER include/spdk/nvmf_spec.h 00:02:09.300 TEST_HEADER include/spdk/nvmf_transport.h 00:02:09.300 TEST_HEADER include/spdk/opal_spec.h 00:02:09.300 TEST_HEADER include/spdk/pci_ids.h 00:02:09.300 TEST_HEADER include/spdk/opal.h 00:02:09.300 TEST_HEADER include/spdk/queue.h 00:02:09.300 TEST_HEADER include/spdk/pipe.h 00:02:09.300 TEST_HEADER include/spdk/reduce.h 00:02:09.300 CC app/spdk_tgt/spdk_tgt.o 00:02:09.300 TEST_HEADER include/spdk/rpc.h 00:02:09.300 TEST_HEADER include/spdk/scheduler.h 00:02:09.300 TEST_HEADER include/spdk/scsi.h 00:02:09.300 TEST_HEADER include/spdk/scsi_spec.h 00:02:09.300 TEST_HEADER include/spdk/sock.h 00:02:09.300 TEST_HEADER include/spdk/stdinc.h 00:02:09.300 TEST_HEADER include/spdk/string.h 00:02:09.300 TEST_HEADER include/spdk/thread.h 00:02:09.300 TEST_HEADER include/spdk/trace_parser.h 00:02:09.300 TEST_HEADER include/spdk/trace.h 00:02:09.300 TEST_HEADER include/spdk/tree.h 00:02:09.300 TEST_HEADER include/spdk/ublk.h 00:02:09.300 TEST_HEADER include/spdk/util.h 00:02:09.300 TEST_HEADER include/spdk/uuid.h 00:02:09.300 TEST_HEADER include/spdk/version.h 00:02:09.300 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:09.300 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:09.300 CC examples/ioat/perf/perf.o 00:02:09.300 TEST_HEADER include/spdk/vhost.h 00:02:09.300 TEST_HEADER include/spdk/vmd.h 00:02:09.300 TEST_HEADER include/spdk/xor.h 00:02:09.300 CC examples/ioat/verify/verify.o 00:02:09.300 TEST_HEADER include/spdk/zipf.h 00:02:09.300 CXX test/cpp_headers/accel.o 00:02:09.300 CXX test/cpp_headers/accel_module.o 00:02:09.300 CXX test/cpp_headers/barrier.o 00:02:09.300 CXX test/cpp_headers/assert.o 00:02:09.300 CXX test/cpp_headers/base64.o 00:02:09.300 CXX test/cpp_headers/bdev_zone.o 00:02:09.300 CXX test/cpp_headers/bdev.o 00:02:09.300 CXX test/cpp_headers/bit_array.o 00:02:09.300 CXX test/cpp_headers/bdev_module.o 00:02:09.300 CXX test/cpp_headers/bit_pool.o 00:02:09.300 CC examples/util/zipf/zipf.o 00:02:09.300 CXX test/cpp_headers/blob_bdev.o 00:02:09.300 CXX test/cpp_headers/blobfs.o 00:02:09.300 CXX test/cpp_headers/blobfs_bdev.o 00:02:09.300 CC app/fio/nvme/fio_plugin.o 00:02:09.300 CXX test/cpp_headers/blob.o 00:02:09.300 CXX test/cpp_headers/conf.o 00:02:09.300 CXX test/cpp_headers/config.o 00:02:09.300 CXX test/cpp_headers/cpuset.o 00:02:09.300 CXX test/cpp_headers/crc16.o 00:02:09.300 CXX test/cpp_headers/crc64.o 00:02:09.300 CXX test/cpp_headers/crc32.o 00:02:09.300 CXX test/cpp_headers/dif.o 00:02:09.300 CXX test/cpp_headers/dma.o 00:02:09.300 CXX test/cpp_headers/env_dpdk.o 00:02:09.300 CXX test/cpp_headers/endian.o 00:02:09.300 CC test/env/pci/pci_ut.o 00:02:09.300 CXX test/cpp_headers/event.o 00:02:09.300 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:09.300 CXX test/cpp_headers/env.o 00:02:09.300 CXX test/cpp_headers/fd.o 00:02:09.300 CXX test/cpp_headers/file.o 00:02:09.300 CXX test/cpp_headers/fd_group.o 00:02:09.300 CXX 
test/cpp_headers/ftl.o 00:02:09.300 CXX test/cpp_headers/gpt_spec.o 00:02:09.300 CXX test/cpp_headers/hexlify.o 00:02:09.300 CXX test/cpp_headers/histogram_data.o 00:02:09.300 CXX test/cpp_headers/idxd_spec.o 00:02:09.300 CC test/env/vtophys/vtophys.o 00:02:09.300 CXX test/cpp_headers/idxd.o 00:02:09.300 CXX test/cpp_headers/init.o 00:02:09.300 CXX test/cpp_headers/ioat.o 00:02:09.300 CC test/env/memory/memory_ut.o 00:02:09.300 CXX test/cpp_headers/ioat_spec.o 00:02:09.300 LINK spdk_lspci 00:02:09.300 CXX test/cpp_headers/json.o 00:02:09.300 CXX test/cpp_headers/iscsi_spec.o 00:02:09.300 CXX test/cpp_headers/jsonrpc.o 00:02:09.300 CC test/app/jsoncat/jsoncat.o 00:02:09.300 CC test/app/histogram_perf/histogram_perf.o 00:02:09.300 CXX test/cpp_headers/keyring.o 00:02:09.300 CXX test/cpp_headers/keyring_module.o 00:02:09.300 CXX test/cpp_headers/likely.o 00:02:09.300 CXX test/cpp_headers/log.o 00:02:09.300 CXX test/cpp_headers/lvol.o 00:02:09.300 CXX test/cpp_headers/mmio.o 00:02:09.300 CXX test/cpp_headers/nbd.o 00:02:09.300 CXX test/cpp_headers/nvme.o 00:02:09.300 CXX test/cpp_headers/notify.o 00:02:09.300 CXX test/cpp_headers/memory.o 00:02:09.300 CXX test/cpp_headers/nvme_intel.o 00:02:09.300 CC test/thread/poller_perf/poller_perf.o 00:02:09.300 CXX test/cpp_headers/nvme_ocssd.o 00:02:09.300 CXX test/cpp_headers/nvme_spec.o 00:02:09.300 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:09.300 CXX test/cpp_headers/nvmf_cmd.o 00:02:09.300 CXX test/cpp_headers/nvmf.o 00:02:09.300 CXX test/cpp_headers/nvme_zns.o 00:02:09.300 CXX test/cpp_headers/nvmf_spec.o 00:02:09.300 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:09.300 CXX test/cpp_headers/nvmf_transport.o 00:02:09.300 CXX test/cpp_headers/opal.o 00:02:09.300 CC test/thread/lock/spdk_lock.o 00:02:09.300 CXX test/cpp_headers/opal_spec.o 00:02:09.300 CC test/app/stub/stub.o 00:02:09.300 CXX test/cpp_headers/reduce.o 00:02:09.300 CXX test/cpp_headers/queue.o 00:02:09.300 CXX test/cpp_headers/pci_ids.o 00:02:09.300 CXX test/cpp_headers/pipe.o 00:02:09.300 CXX test/cpp_headers/rpc.o 00:02:09.300 CXX test/cpp_headers/scsi.o 00:02:09.300 CXX test/cpp_headers/scheduler.o 00:02:09.300 CXX test/cpp_headers/scsi_spec.o 00:02:09.300 CC app/fio/bdev/fio_plugin.o 00:02:09.300 CXX test/cpp_headers/sock.o 00:02:09.300 CXX test/cpp_headers/stdinc.o 00:02:09.300 CXX test/cpp_headers/string.o 00:02:09.300 CXX test/cpp_headers/thread.o 00:02:09.300 CXX test/cpp_headers/trace.o 00:02:09.300 CXX test/cpp_headers/trace_parser.o 00:02:09.300 CXX test/cpp_headers/tree.o 00:02:09.300 CXX test/cpp_headers/ublk.o 00:02:09.300 CXX test/cpp_headers/util.o 00:02:09.300 CXX test/cpp_headers/uuid.o 00:02:09.300 CXX test/cpp_headers/version.o 00:02:09.300 CXX test/cpp_headers/vfio_user_spec.o 00:02:09.300 CXX test/cpp_headers/vfio_user_pci.o 00:02:09.300 CXX test/cpp_headers/vhost.o 00:02:09.300 CXX test/cpp_headers/vmd.o 00:02:09.300 CXX test/cpp_headers/xor.o 00:02:09.300 CXX test/cpp_headers/zipf.o 00:02:09.300 LINK spdk_nvme_discover 00:02:09.300 CC test/app/bdev_svc/bdev_svc.o 00:02:09.300 LINK rpc_client_test 00:02:09.300 LINK spdk_trace_record 00:02:09.300 CC test/dma/test_dma/test_dma.o 00:02:09.560 LINK interrupt_tgt 00:02:09.560 LINK nvmf_tgt 00:02:09.560 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:09.560 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:09.560 LINK zipf 00:02:09.560 CC test/env/mem_callbacks/mem_callbacks.o 00:02:09.560 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:09.560 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:09.560 LINK iscsi_tgt 
00:02:09.560 LINK jsoncat 00:02:09.560 LINK poller_perf 00:02:09.560 LINK ioat_perf 00:02:09.560 LINK histogram_perf 00:02:09.560 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:09.560 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:09.560 LINK env_dpdk_post_init 00:02:09.560 LINK spdk_tgt 00:02:09.560 LINK verify 00:02:09.560 LINK spdk_trace 00:02:09.560 LINK vtophys 00:02:09.560 LINK bdev_svc 00:02:09.560 fio_plugin.c:1582:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:02:09.560 struct spdk_nvme_fdp_ruhs ruhs; 00:02:09.560 ^ 00:02:09.560 LINK stub 00:02:09.560 LINK spdk_dd 00:02:09.820 LINK pci_ut 00:02:09.820 LINK spdk_top 00:02:09.820 LINK llvm_vfio_fuzz 00:02:09.820 LINK vhost_fuzz 00:02:09.820 LINK spdk_nvme_identify 00:02:09.820 LINK test_dma 00:02:09.820 1 warning generated. 00:02:09.820 LINK nvme_fuzz 00:02:09.820 LINK spdk_nvme_perf 00:02:09.820 LINK spdk_bdev 00:02:09.820 LINK spdk_nvme 00:02:09.820 LINK mem_callbacks 00:02:10.081 CC app/vhost/vhost.o 00:02:10.081 CC examples/idxd/perf/perf.o 00:02:10.081 CC examples/vmd/lsvmd/lsvmd.o 00:02:10.081 CC examples/vmd/led/led.o 00:02:10.081 LINK llvm_nvme_fuzz 00:02:10.081 CC examples/sock/hello_world/hello_sock.o 00:02:10.081 LINK memory_ut 00:02:10.081 CC examples/thread/thread/thread_ex.o 00:02:10.081 LINK led 00:02:10.081 LINK lsvmd 00:02:10.081 LINK vhost 00:02:10.342 LINK spdk_lock 00:02:10.342 LINK hello_sock 00:02:10.342 LINK idxd_perf 00:02:10.342 LINK thread 00:02:10.342 LINK iscsi_fuzz 00:02:10.913 CC test/event/reactor/reactor.o 00:02:10.913 CC test/event/event_perf/event_perf.o 00:02:10.913 CC test/event/reactor_perf/reactor_perf.o 00:02:10.913 CC test/event/app_repeat/app_repeat.o 00:02:10.913 CC examples/nvme/abort/abort.o 00:02:10.913 CC examples/nvme/hello_world/hello_world.o 00:02:10.913 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:10.913 CC examples/nvme/reconnect/reconnect.o 00:02:10.913 CC examples/nvme/hotplug/hotplug.o 00:02:10.913 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:10.913 CC examples/nvme/arbitration/arbitration.o 00:02:10.913 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:10.913 CC test/event/scheduler/scheduler.o 00:02:10.913 LINK reactor 00:02:10.913 LINK event_perf 00:02:10.913 LINK reactor_perf 00:02:10.913 LINK app_repeat 00:02:10.913 LINK hello_world 00:02:10.914 LINK cmb_copy 00:02:11.174 LINK pmr_persistence 00:02:11.174 LINK hotplug 00:02:11.174 LINK scheduler 00:02:11.174 LINK reconnect 00:02:11.174 LINK abort 00:02:11.174 LINK arbitration 00:02:11.174 LINK nvme_manage 00:02:11.435 CC test/nvme/e2edp/nvme_dp.o 00:02:11.435 CC test/nvme/startup/startup.o 00:02:11.435 CC test/nvme/reset/reset.o 00:02:11.435 CC test/nvme/overhead/overhead.o 00:02:11.435 CC test/nvme/sgl/sgl.o 00:02:11.435 CC test/nvme/connect_stress/connect_stress.o 00:02:11.435 CC test/nvme/simple_copy/simple_copy.o 00:02:11.435 CC test/nvme/err_injection/err_injection.o 00:02:11.435 CC test/nvme/boot_partition/boot_partition.o 00:02:11.435 CC test/nvme/fdp/fdp.o 00:02:11.435 CC test/nvme/aer/aer.o 00:02:11.435 CC test/nvme/reserve/reserve.o 00:02:11.435 CC test/nvme/fused_ordering/fused_ordering.o 00:02:11.435 CC test/nvme/cuse/cuse.o 00:02:11.435 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:11.435 CC test/nvme/compliance/nvme_compliance.o 00:02:11.435 CC test/accel/dif/dif.o 00:02:11.435 CC test/blobfs/mkfs/mkfs.o 00:02:11.435 CC test/lvol/esnap/esnap.o 00:02:11.435 
LINK simple_copy 00:02:11.435 LINK startup 00:02:11.435 LINK connect_stress 00:02:11.435 LINK overhead 00:02:11.435 LINK boot_partition 00:02:11.435 LINK err_injection 00:02:11.435 LINK aer 00:02:11.435 LINK doorbell_aers 00:02:11.435 LINK nvme_dp 00:02:11.435 LINK reserve 00:02:11.435 LINK fused_ordering 00:02:11.435 LINK sgl 00:02:11.694 LINK reset 00:02:11.694 LINK mkfs 00:02:11.694 LINK fdp 00:02:11.694 LINK nvme_compliance 00:02:11.694 LINK dif 00:02:11.694 CC examples/accel/perf/accel_perf.o 00:02:11.954 CC examples/blob/hello_world/hello_blob.o 00:02:11.954 CC examples/blob/cli/blobcli.o 00:02:11.954 LINK hello_blob 00:02:12.215 LINK accel_perf 00:02:12.215 LINK blobcli 00:02:12.215 LINK cuse 00:02:13.170 CC examples/bdev/hello_world/hello_bdev.o 00:02:13.170 CC examples/bdev/bdevperf/bdevperf.o 00:02:13.170 LINK hello_bdev 00:02:13.431 CC test/bdev/bdevio/bdevio.o 00:02:13.431 LINK bdevperf 00:02:13.691 LINK bdevio 00:02:15.077 CC examples/nvmf/nvmf/nvmf.o 00:02:15.338 LINK nvmf 00:02:15.599 LINK esnap 00:02:16.987 00:02:16.987 real 0m46.642s 00:02:16.987 user 5m35.660s 00:02:16.987 sys 2m21.915s 00:02:16.987 13:28:05 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:16.987 13:28:05 make -- common/autotest_common.sh@10 -- $ set +x 00:02:16.987 ************************************ 00:02:16.987 END TEST make 00:02:16.987 ************************************ 00:02:16.987 13:28:05 -- common/autotest_common.sh@1142 -- $ return 0 00:02:16.987 13:28:05 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:16.987 13:28:05 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:16.987 13:28:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:16.987 13:28:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.987 13:28:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:16.987 13:28:05 -- pm/common@44 -- $ pid=2294436 00:02:16.987 13:28:05 -- pm/common@50 -- $ kill -TERM 2294436 00:02:16.987 13:28:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.987 13:28:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:16.987 13:28:05 -- pm/common@44 -- $ pid=2294437 00:02:16.987 13:28:05 -- pm/common@50 -- $ kill -TERM 2294437 00:02:16.987 13:28:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.987 13:28:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:16.987 13:28:05 -- pm/common@44 -- $ pid=2294439 00:02:16.987 13:28:05 -- pm/common@50 -- $ kill -TERM 2294439 00:02:16.987 13:28:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.987 13:28:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:16.987 13:28:05 -- pm/common@44 -- $ pid=2294463 00:02:16.987 13:28:05 -- pm/common@50 -- $ sudo -E kill -TERM 2294463 00:02:16.987 13:28:05 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:16.987 13:28:05 -- nvmf/common.sh@7 -- # uname -s 00:02:16.987 13:28:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:16.987 13:28:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:16.987 13:28:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:16.987 13:28:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:16.987 13:28:05 -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:16.987 13:28:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:16.987 13:28:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:16.987 13:28:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:16.987 13:28:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:16.987 13:28:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:16.987 13:28:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:16.987 13:28:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:16.987 13:28:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:16.988 13:28:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:16.988 13:28:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:16.988 13:28:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:16.988 13:28:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:16.988 13:28:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:16.988 13:28:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:16.988 13:28:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:16.988 13:28:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.988 13:28:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.988 13:28:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.988 13:28:05 -- paths/export.sh@5 -- # export PATH 00:02:16.988 13:28:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.988 13:28:05 -- nvmf/common.sh@47 -- # : 0 00:02:16.988 13:28:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:16.988 13:28:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:16.988 13:28:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:16.988 13:28:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:16.988 13:28:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:16.988 13:28:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:16.988 13:28:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:16.988 13:28:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:16.988 13:28:05 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:16.988 13:28:05 -- spdk/autotest.sh@32 -- # uname -s 00:02:16.988 13:28:05 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:16.988 13:28:05 -- 
spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:16.988 13:28:05 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:16.988 13:28:05 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:16.988 13:28:05 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:16.988 13:28:05 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:16.988 13:28:05 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:16.988 13:28:05 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:16.988 13:28:05 -- spdk/autotest.sh@48 -- # udevadm_pid=2359942 00:02:16.988 13:28:05 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:16.988 13:28:05 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:16.988 13:28:05 -- pm/common@17 -- # local monitor 00:02:16.988 13:28:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.988 13:28:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.988 13:28:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.988 13:28:05 -- pm/common@21 -- # date +%s 00:02:16.988 13:28:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.988 13:28:05 -- pm/common@25 -- # sleep 1 00:02:16.988 13:28:05 -- pm/common@21 -- # date +%s 00:02:16.988 13:28:05 -- pm/common@21 -- # date +%s 00:02:16.988 13:28:05 -- pm/common@21 -- # date +%s 00:02:16.988 13:28:05 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720783685 00:02:16.988 13:28:05 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720783685 00:02:16.988 13:28:05 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720783685 00:02:16.988 13:28:05 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720783685 00:02:16.988 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720783685_collect-vmstat.pm.log 00:02:16.988 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720783685_collect-cpu-load.pm.log 00:02:16.988 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720783685_collect-cpu-temp.pm.log 00:02:16.988 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720783685_collect-bmc-pm.bmc.pm.log 00:02:17.932 13:28:06 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:17.932 13:28:06 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:17.932 13:28:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:17.932 13:28:06 -- common/autotest_common.sh@10 -- # set +x 00:02:17.932 13:28:06 -- spdk/autotest.sh@59 -- # create_test_list 00:02:17.932 13:28:06 -- 
00:02:17.932 13:28:06 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:17.932 13:28:06 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:02:17.932 13:28:06 -- common/autotest_common.sh@722 -- # xtrace_disable
00:02:17.932 13:28:06 -- common/autotest_common.sh@10 -- # set +x
00:02:17.932 13:28:06 -- spdk/autotest.sh@59 -- # create_test_list
00:02:17.932 13:28:06 -- common/autotest_common.sh@746 -- # xtrace_disable
00:02:17.932 13:28:06 -- common/autotest_common.sh@10 -- # set +x
00:02:17.932 13:28:06 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh
00:02:17.932 13:28:06 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:02:17.932 13:28:06 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:02:17.932 13:28:06 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:02:17.932 13:28:06 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:02:17.932 13:28:06 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:02:17.932 13:28:06 -- common/autotest_common.sh@1455 -- # uname
00:02:17.932 13:28:06 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:02:17.932 13:28:06 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:02:17.932 13:28:06 -- common/autotest_common.sh@1475 -- # uname
00:02:17.932 13:28:06 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:02:17.932 13:28:06 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:02:18.192 13:28:06 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang
00:02:18.192 13:28:06 -- spdk/autotest.sh@72 -- # hash lcov
00:02:18.192 13:28:06 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]]
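The CC_TYPE probe just above decides whether coverage tooling applies: mk/cc.mk records the compiler chosen at configure time, and lcov handling is only engaged for clang builds. Roughly, as a sketch:

    # read the compiler type recorded by configure; gate coverage on clang
    CC_TYPE=$(grep CC_TYPE mk/cc.mk)           # yields e.g. "CC_TYPE=clang"
    if hash lcov 2>/dev/null && [[ $CC_TYPE == *clang* ]]; then
        :   # coverage setup would go here (not exercised in this run)
    fi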
00:02:18.192 13:28:06 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:02:18.192 13:28:06 -- common/autotest_common.sh@722 -- # xtrace_disable
00:02:18.192 13:28:06 -- common/autotest_common.sh@10 -- # set +x
00:02:18.193 13:28:06 -- spdk/autotest.sh@91 -- # rm -f
00:02:18.193 13:28:06 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:02:22.390 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:02:22.391 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:02:22.391 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:02:22.391 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:02:22.391 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:02:22.391 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:02:22.391 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:02:22.391 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:02:22.391 0000:65:00.0 (144d a80a): Already using the nvme driver
00:02:22.391 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:02:22.391 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:02:22.391 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:02:22.391 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:02:22.391 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:02:22.391 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:02:22.391 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:02:22.391 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:02:22.391 13:28:10 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:02:22.391 13:28:10 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:02:22.391 13:28:10 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:02:22.391 13:28:10 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:02:22.391 13:28:10 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:02:22.391 13:28:10 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:02:22.391 13:28:10 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:02:22.391 13:28:10 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:22.391 13:28:10 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:02:22.391 13:28:10 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:02:22.391 13:28:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:02:22.391 13:28:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:02:22.391 13:28:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:02:22.391 13:28:10 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:02:22.391 13:28:10 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:02:22.391 No valid GPT data, bailing
00:02:22.391 13:28:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:02:22.391 13:28:10 -- scripts/common.sh@391 -- # pt=
00:02:22.391 13:28:10 -- scripts/common.sh@392 -- # return 1
00:02:22.391 13:28:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:02:22.391 1+0 records in
00:02:22.391 1+0 records out
00:02:22.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00480036 s, 218 MB/s
00:02:22.391 13:28:10 -- spdk/autotest.sh@118 -- # sync
00:02:30.524 13:28:18 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:02:30.524 13:28:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:02:30.524 13:28:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes
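The pre-cleanup sequence above amounts to: skip any zoned namespace, then treat a disk with no recognizable partition table as unused and scrub its first MiB. Condensed into a standalone sketch, with the device name hardcoded for illustration:

    dev=nvme0n1
    # zoned namespaces must not be blindly zeroed; /sys reports "none" for regular disks
    [[ -e /sys/block/$dev/queue/zoned && $(< /sys/block/$dev/queue/zoned) != none ]] && exit 0
    # no PTTYPE from blkid means no partition table: the disk is considered free
    if [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
        dd if=/dev/zero of="/dev/$dev" bs=1M count=1   # wipe stale metadata
        sync
    fi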
00:02:30.524 13:28:18 -- spdk/autotest.sh@124 -- # uname -s
00:02:30.524 13:28:18 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:02:30.524 13:28:18 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh
00:02:30.524 13:28:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:30.524 13:28:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:30.524 13:28:18 -- common/autotest_common.sh@10 -- # set +x
00:02:30.524 ************************************
00:02:30.524 START TEST setup.sh
00:02:30.524 ************************************
00:02:30.524 13:28:18 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh
00:02:30.524 * Looking for test storage...
00:02:30.524 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:02:30.524 13:28:18 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:02:30.524 13:28:18 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:02:30.524 13:28:18 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh
00:02:30.524 13:28:18 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:30.524 13:28:18 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:30.524 13:28:18 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:30.524 ************************************
00:02:30.524 START TEST acl
00:02:30.524 ************************************
00:02:30.524 13:28:18 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh
00:02:30.524 * Looking for test storage...
00:02:30.524 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:02:30.524 13:28:18 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:02:30.524 13:28:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:02:30.524 13:28:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:02:30.524 13:28:18 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf
00:02:30.524 13:28:18 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:02:30.524 13:28:18 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:02:30.524 13:28:18 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:02:30.524 13:28:18 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:30.524 13:28:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:02:30.524 13:28:18 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:02:30.524 13:28:18 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:02:30.524 13:28:18 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:02:30.524 13:28:18 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:02:30.524 13:28:18 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:02:30.524 13:28:18 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:30.524 13:28:18 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:02:34.727 13:28:22 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:02:34.727 13:28:22 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:02:34.727 13:28:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:34.727 13:28:22 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:02:34.727 13:28:22 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:02:34.727 13:28:22 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:02:38.026 Hugepages
00:02:38.026 node hugesize free / total
00:02:38.026 13:28:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:02:38.026 13:28:26 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:02:38.026 13:28:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.026 13:28:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:02:38.026 13:28:26 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:02:38.026 13:28:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.026 13:28:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:02:38.026 13:28:26 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:02:38.026 13:28:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.026 00
00:02:38.026 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:38.026 13:28:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:02:38.026 13:28:26 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:02:38.026 13:28:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.026 13:28:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]]
00:02:38.026 13:28:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:38.026 13:28:26 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:38.026 13:28:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[... the same @19 BDF check, @20 ioatdma-vs-nvme check, @20 continue, and @18 read repeat for 0000:00:01.1 through 0000:00:01.7 ...]
00:02:38.027 13:28:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]]
00:02:38.027 13:28:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:02:38.027 13:28:26 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]]
00:02:38.027 13:28:26 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:02:38.027 13:28:26 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:02:38.027 13:28:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[... the same ioatdma check and continue repeat for 0000:80:01.0 through 0000:80:01.7 ...]
00:02:38.027 13:28:26 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:02:38.027 13:28:26 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:02:38.027 13:28:26 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:38.027 13:28:26 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:38.027 13:28:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:02:38.027 ************************************
00:02:38.027 START TEST denied
00:02:38.027 ************************************
00:02:38.027 13:28:26 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied
00:02:38.027 13:28:26 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0'
00:02:38.027 13:28:26 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:02:38.027 13:28:26 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0'
00:02:38.027 13:28:26 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:02:38.027 13:28:26 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:02:42.280 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0
00:02:42.280 13:28:30 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0
00:02:42.280 13:28:30 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:02:42.280 13:28:30 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:02:42.280 13:28:30 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]]
00:02:42.280 13:28:30 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver
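collect_setup_devs drives the read loop above: `setup.sh status` prints a fixed-column table, and each row whose BDF column looks like a PCI address and whose driver column is nvme gets recorded. Approximately, as a sketch:

    declare -a devs; declare -A drivers
    # columns: Type BDF Vendor Device NUMA Driver Device Block-devices
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue      # skip the hugepage and header rows
        [[ $driver == nvme ]] || continue      # ioatdma channels are ignored here
        [[ $PCI_BLOCKED == *"$dev"* ]] && continue
        devs+=("$dev"); drivers["$dev"]=$driver
    done < <(./scripts/setup.sh status)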
00:02:42.280 13:28:30 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:02:42.280 13:28:30 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:02:42.280 13:28:30 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:02:42.280 13:28:30 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:42.280 13:28:30 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:02:47.601 
00:02:47.601 real 0m8.967s
00:02:47.601 user 0m2.984s
00:02:47.601 sys 0m5.320s
00:02:47.601 13:28:35 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:47.601 13:28:35 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:02:47.601 ************************************
00:02:47.601 END TEST denied
00:02:47.601 ************************************
00:02:47.601 13:28:35 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:02:47.601 13:28:35 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:02:47.601 13:28:35 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:47.601 13:28:35 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:47.601 13:28:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:02:47.601 ************************************
00:02:47.601 START TEST allowed
00:02:47.601 ************************************
00:02:47.601 13:28:35 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed
00:02:47.601 13:28:35 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0
00:02:47.601 13:28:35 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:02:47.601 13:28:35 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*'
00:02:47.601 13:28:35 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:02:47.601 13:28:35 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:02:52.885 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:02:52.885 13:28:41 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:02:52.885 13:28:41 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:02:52.885 13:28:41 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:02:52.885 13:28:41 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:52.885 13:28:41 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:02:57.116 
00:02:57.116 real 0m9.767s
00:02:57.116 user 0m2.921s
00:02:57.116 sys 0m5.161s
00:02:57.116 13:28:45 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:57.116 13:28:45 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:02:57.116 ************************************
00:02:57.116 END TEST allowed
00:02:57.116 ************************************
00:02:57.116 13:28:45 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:02:57.116 
00:02:57.116 real 0m26.548s
00:02:57.116 user 0m8.663s
00:02:57.116 sys 0m15.599s
00:02:57.116 13:28:45 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:57.116 13:28:45 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:57.116 ************************************
00:02:57.116 END TEST acl
00:02:57.116 ************************************
00:02:57.116 13:28:45 setup.sh -- common/autotest_common.sh@1142 -- # return 0
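The denied/allowed pair above is driven purely by environment variables that setup.sh consumes. Reproduced by hand from the spdk tree (as root), the two checks would look roughly like:

    # denied: a blocked controller must be skipped, not rebound
    PCI_BLOCKED=' 0000:65:00.0' scripts/setup.sh config |
        grep 'Skipping denied controller at 0000:65:00.0'
    scripts/setup.sh reset
    # allowed: with an explicit allow list the controller must move nvme -> vfio-pci
    PCI_ALLOWED=0000:65:00.0 scripts/setup.sh config |
        grep -E '0000:65:00.0 .*: nvme -> .*'
    scripts/setup.sh reset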
00:02:57.116 13:28:45 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh
00:02:57.116 13:28:45 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:57.116 13:28:45 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:57.116 13:28:45 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:57.116 ************************************
00:02:57.116 START TEST hugepages
00:02:57.116 ************************************
00:02:57.116 13:28:45 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh
00:02:57.116 * Looking for test storage...
00:02:57.116 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 103984076 kB' 'MemAvailable: 107675356 kB' 'Buffers: 4152 kB' 'Cached: 12270468 kB' 'SwapCached: 0 kB' 'Active: 9195448 kB' 'Inactive: 3696268 kB' 'Active(anon): 8704016 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620472 kB' 'Mapped: 176508 kB' 'Shmem: 8086920 kB' 'KReclaimable: 544524 kB' 'Slab: 1417544 kB' 'SReclaimable: 544524 kB' 'SUnreclaim: 873020 kB' 'KernelStack: 27728 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460876 kB' 'Committed_AS: 10315908 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238044 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:02:57.116 13:28:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
[... the same @32 comparison, @32 continue, and @31 IFS/read repeat for every other /proc/meminfo field down to HugePages_Surp ...]
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
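Each [[ field == Hugepagesize ]] / continue pair above is one iteration of setup/common.sh's get_meminfo scanning /proc/meminfo. Stripped of the per-node handling visible in the trace, the helper reduces to roughly this sketch:

    # scan "<name>: <value> <unit>" records until the requested field is found
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # e.g. 2048 for Hugepagesize on this machine
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

With a node argument, the real helper reads /sys/devices/system/node/node<N>/meminfo instead and strips the "Node <N>" prefix, which is what the mapfile and mem=(...) entries in the trace are doing.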
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:02:57.118 13:28:45 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:02:57.118 13:28:45 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:57.118 13:28:45 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:57.118 13:28:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:57.118 ************************************
00:02:57.118 START TEST default_setup
00:02:57.118 ************************************
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
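get_test_nr_hugepages above is plain arithmetic: the requested size and the page size are both in kB, so the request works out to the nr_hugepages=1024 the trace shows. As a worked sketch:

    size_kb=2097152                  # requested hugepage pool, in kB
    default_hugepages=2048           # Hugepagesize from /proc/meminfo, in kB
    (( size_kb >= default_hugepages )) || exit 1
    nr_hugepages=$(( size_kb / default_hugepages ))
    echo "$nr_hugepages"             # -> 1024, assigned to node 0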
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:02:57.118 13:28:45 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:01.331 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:01.331 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:01.331 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:01.331 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:01.331 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:01.331 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:01.331 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:01.331 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:01.331 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:01.331 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:01.331 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:01.331 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:01.331 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:01.331 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:01.331 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:01.331 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:01.331 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
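verify_nr_hugepages first checks whether transparent hugepages could skew the counters: the active THP mode is the bracketed word in the sysfs file, and only when it is not [never] does the AnonHugePages value matter. As a sketch, reusing the get_meminfo helper shown earlier:

    # active mode is the bracketed word, e.g. "always [madvise] never"
    thp_mode=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp_mode != *"[never]"* ]]; then
        anon_kb=$(get_meminfo AnonHugePages)   # THP-backed anonymous memory, in kB
    fi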
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106251744 kB' 'MemAvailable: 109942864 kB' 'Buffers: 4152 kB' 'Cached: 12270608 kB' 'SwapCached: 0 kB' 'Active: 9213788 kB' 'Inactive: 3696268 kB' 'Active(anon): 8722356 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638216 kB' 'Mapped: 176428 kB' 'Shmem: 8087060 kB' 'KReclaimable: 544364 kB' 'Slab: 1415012 kB' 'SReclaimable: 544364 kB' 'SUnreclaim: 870648 kB' 'KernelStack: 27824 kB' 'PageTables: 9168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10336324 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238220 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:01.331 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... the same @32 comparison, @32 continue, and @31 IFS/read repeat field by field; the captured log breaks off during this scan, around WritebackTmp ...]
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- 
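What the trace above is doing: SPDK's setup/common.sh get_meminfo walks the meminfo key/value pairs one read at a time, rejecting every key that is not the one requested (the escaped pattern \A\n\o\n... in the xtrace output is just the literal string AnonHugePages) and echoing the value of the first match. A minimal sketch of that scan, assuming the plain /proc/meminfo case with no per-node handling (not SPDK's exact helper):

```bash
#!/usr/bin/env bash
# Minimal sketch of the get_meminfo scan traced above: split each
# /proc/meminfo line on ': ', skip keys that don't match, echo the value
# of the requested field.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching key: keep scanning
        echo "$val"                        # value in kB (the unit lands in "_")
        return 0
    done < /proc/meminfo
    return 1                               # field not present
}

get_meminfo AnonHugePages    # prints 0 on the host traced above
```

Splitting with IFS=': ' is what makes the per-key tests so cheap here: read hands back the key in var and the number in val, so the loop body is a single string comparison per meminfo line.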
00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106251540 kB' 'MemAvailable: 109942660 kB' 'Buffers: 4152 kB' 'Cached: 12270612 kB' 'SwapCached: 0 kB' 'Active: 9212748 kB' 'Inactive: 3696268 kB' 'Active(anon): 8721316 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637612 kB' 'Mapped: 176288 kB' 'Shmem: 8087064 kB' 'KReclaimable: 544364 kB' 'Slab: 1415004 kB' 'SReclaimable: 544364 kB' 'SUnreclaim: 870640 kB' 'KernelStack: 27792 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10336340 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238204 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:01.332 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:01.333 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:01.333 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... identical setup/common.sh@31-@32 skip iterations for MemFree through HugePages_Rsvd elided; each field is rejected against the HugePages_Surp pattern exactly as above ...]
00:03:01.334 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:01.334 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:01.334 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:01.334 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:01.334 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
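Before each scan, the helper picks its input file. The trace shows local node= (empty), a failed existence test on /sys/devices/system/node/node/meminfo, a fallback to /proc/meminfo, and an extglob strip of the "Node <N> " prefix that per-NUMA-node meminfo files carry. A sketch of that selection logic, with the branch shape assumed from the trace rather than copied from the source:

```bash
# Sketch of the source selection seen at setup/common.sh@18-@29 (assumed
# shape, reconstructed from the trace): with an empty $node the per-node
# path test fails and the helper falls back to /proc/meminfo.
shopt -s extglob                      # enables the +([0-9]) pattern below
node=${1:-}                           # empty in this run ("local node=")
mem_f=/proc/meminfo
node_f=/sys/devices/system/node/node$node/meminfo
if [[ -n $node && -e $node_f ]]; then
    mem_f=$node_f
fi
mapfile -t mem < "$mem_f"             # one array element per meminfo line
mem=("${mem[@]#Node +([0-9]) }")      # drop any "Node 0 " style prefix
printf '%s\n' "${mem[@]}"             # replay the lines, as common.sh@16 does
```

The extglob strip is what lets the same read loop handle both file formats: after it runs, a per-node line like "Node 0 MemFree: ... kB" looks identical to its /proc/meminfo counterpart.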
00:03:01.334 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:01.334 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:01.334 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:01.334 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:01.334 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:01.334 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:01.334 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:01.334 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:01.334 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:01.334 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106251540 kB' 'MemAvailable: 109942660 kB' 'Buffers: 4152 kB' 'Cached: 12270632 kB' 'SwapCached: 0 kB' 'Active: 9212776 kB' 'Inactive: 3696268 kB' 'Active(anon): 8721344 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637620 kB' 'Mapped: 176288 kB' 'Shmem: 8087084 kB' 'KReclaimable: 544364 kB' 'Slab: 1415004 kB' 'SReclaimable: 544364 kB' 'SUnreclaim: 870640 kB' 'KernelStack: 27792 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10336360 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238204 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
00:03:01.334 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:01.334 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:01.334 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:01.334 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:01.334 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:01.334 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... identical setup/common.sh@31-@32 skip iterations for MemFree through HugePages_Free elided; each field is rejected against the HugePages_Rsvd pattern exactly as above ...]
00:03:01.336 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:01.336 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:01.336 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:01.336 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:01.336 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:01.336 nr_hugepages=1024
00:03:01.336 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:01.336 resv_hugepages=0
00:03:01.336 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:01.336 surplus_hugepages=0
00:03:01.336 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:01.336 anon_hugepages=0
00:03:01.336 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:01.336 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
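With anon, surp and resv all measured as 0 and nr_hugepages echoed as 1024, the two arithmetic tests at setup/hugepages.sh@107-@109 check that the expected page count adds up: 1024 == 1024 + 0 + 0, and the pool size equals nr_hugepages with nothing surplus or reserved. Restated as a runnable check (the variable names mirror the trace; the surrounding control flow is assumed):

```bash
# The accounting asserted at setup/hugepages.sh@107-@109, with the values
# observed in this run.
nr_hugepages=1024 surp=0 resv=0 anon=0
(( 1024 == nr_hugepages + surp + resv ))   # expected total matches the pool
(( 1024 == nr_hugepages ))                 # no surplus or reserved pages in use
echo "nr_hugepages=$nr_hugepages anon=$anon surp=$surp resv=$resv"
```

The get_meminfo HugePages_Total call traced next re-reads the total from meminfo, which the dump below reports as 1024, consistent with both tests.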
13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106251288 kB' 'MemAvailable: 109942408 kB' 'Buffers: 4152 kB' 'Cached: 12270652 kB' 'SwapCached: 0 kB' 'Active: 9212716 kB' 'Inactive: 3696268 kB' 'Active(anon): 8721284 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637512 kB' 'Mapped: 176288 kB' 'Shmem: 8087104 kB' 'KReclaimable: 544364 kB' 'Slab: 1415004 kB' 'SReclaimable: 544364 kB' 'SUnreclaim: 870640 kB' 'KernelStack: 27776 kB' 'PageTables: 8996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10336384 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238204 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
00:03:01.336 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[identical compare/continue/IFS/read xtrace cycles elided for each remaining key, MemFree through Unaccepted, none of which matches HugePages_Total]
13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup --
setup/common.sh@31 -- # read -r var val _ 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58262816 kB' 'MemUsed: 7396192 kB' 'SwapCached: 0 kB' 'Active: 2347252 kB' 'Inactive: 283540 kB' 'Active(anon): 2189504 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 283540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2478116 kB' 'Mapped: 38664 kB' 'AnonPages: 155932 kB' 'Shmem: 2036828 kB' 'KernelStack: 13544 kB' 'PageTables: 3656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 298240 kB' 'Slab: 724724 kB' 
'SReclaimable: 298240 kB' 'SUnreclaim: 426484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:01.338 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[identical compare/continue/IFS/read xtrace cycles elided for each remaining node0 key, MemFree through Unaccepted, none of which matches HugePages_Surp]
13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
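
The HugePages_Surp pass above is the per-node variant of the same lookup: get_nodes enumerated /sys/devices/system/node/node0 and node1 (nodes_sys[0]=1024, nodes_sys[1]=0), and each node's surplus count is then folded into the expected totals. A hedged sketch of that per-node walk, reusing the hypothetical get_meminfo_value helper sketched earlier:

    shopt -s extglob nullglob
    # Walk every NUMA node directory and report its hugepage counters.
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}    # "node0" -> 0
        total=$(get_meminfo_value HugePages_Total "$id")
        surp=$(get_meminfo_value HugePages_Surp "$id")
        echo "node$id: HugePages_Total=$total HugePages_Surp=$surp"
    done
    # On this box node0 reports 1024 total / 0 surplus, which is what
    # the "node0=1024 expecting 1024" check just below relies on.
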
00:03:01.339 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:01.339 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.339 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:01.339 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:01.339 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:01.339 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.339 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:01.339 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:01.339 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:01.339 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.339 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:01.339 13:28:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:01.339 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:01.339 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:01.339 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:01.339 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:01.339 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:01.339 node0=1024 expecting 1024 00:03:01.339 13:28:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:01.339 00:03:01.339 real 0m4.168s 00:03:01.339 user 0m1.631s 00:03:01.339 sys 0m2.543s 00:03:01.339 13:28:49 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:01.339 13:28:49 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:01.339 ************************************ 00:03:01.339 END TEST default_setup 00:03:01.339 ************************************ 00:03:01.339 13:28:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:01.339 13:28:49 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:01.339 13:28:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:01.339 13:28:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:01.339 13:28:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:01.339 ************************************ 00:03:01.339 START TEST per_node_1G_alloc 00:03:01.339 ************************************ 00:03:01.339 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:01.339 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:01.339 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:01.339 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:01.339 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:01.339 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:01.339 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:01.339 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:01.339 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:01.339 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:01.340 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:01.340 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:01.340 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:01.340 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:01.340 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:01.340 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:01.340 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:01.340 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:01.340 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:01.340 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:01.340 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:01.340 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:01.340 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:01.340 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:01.340 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:01.340 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:01.340 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.340 13:28:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:05.543 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:05.543 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:05.543 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:05.543 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:05.543 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:05.543 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:05.543 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:05.543 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:05.543 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:05.543 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:05.543 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:05.543 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:05.543 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 
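
The page-count arithmetic behind nr_hugepages=512 is visible in the trace: the test asked for 1048576 kB (1 GiB) on each of nodes 0 and 1, and with the 2048 kB Hugepagesize reported in the meminfo dumps that is 1048576 / 2048 = 512 pages per node. As a short sketch (treating the size argument as kB is an assumption consistent with these values):

    size_kb=1048576                    # requested per-node allocation (1 GiB)
    hugepage_kb=2048                   # Hugepagesize from /proc/meminfo
    echo $(( size_kb / hugepage_kb ))  # -> 512, hence NRHUGE=512 HUGENODE=0,1
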
00:03:05.543 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:05.543 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:05.543 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:05.543 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106382156 kB' 'MemAvailable: 110073276 kB' 'Buffers: 4152 kB' 'Cached: 12270768 kB' 'SwapCached: 0 kB' 'Active: 9213212 kB' 'Inactive: 3696268 kB' 'Active(anon): 8721780 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637348 kB' 'Mapped: 175172 kB' 'Shmem: 8087220 kB' 'KReclaimable: 544364 kB' 'Slab: 1414176 kB' 'SReclaimable: 544364 kB' 'SUnreclaim: 869812 kB' 'KernelStack: 27792 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10322288 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238396 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
00:03:05.543 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[identical compare/continue/IFS/read xtrace cycles elided for each remaining key, MemFree through HardwareCorrupted, none of which matches AnonHugePages]
00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return
0 00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106381816 kB' 'MemAvailable: 110072936 kB' 'Buffers: 4152 kB' 'Cached: 12270772 kB' 'SwapCached: 0 kB' 'Active: 9213636 kB' 'Inactive: 3696268 kB' 'Active(anon): 8722204 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638332 kB' 'Mapped: 175164 kB' 'Shmem: 8087224 kB' 'KReclaimable: 544364 kB' 'Slab: 1414176 kB' 'SReclaimable: 544364 kB' 'SUnreclaim: 869812 kB' 'KernelStack: 27872 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10322068 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238460 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB' 00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.544 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.544 13:28:53 
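The block above is one full expansion of the get_meminfo helper from setup/common.sh under bash xtrace: it slurps /proc/meminfo into an array, then scans it record by record until the requested field matches, printing the bare value. A minimal sketch of that pattern, reconstructed from the trace alone (the real setup/common.sh in the SPDK tree may differ in detail):

    # get_meminfo FIELD [NODE]: reconstructed from the xtrace above; illustrative only
    shopt -s extglob                  # required for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # with a node argument, prefer the per-node view when it exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node meminfo prefixes every line with "Node N "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        # scan "Field: value [kB]" records until the requested one is found
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo AnonHugePages with no node, it prints just the number (0 on this box), which is why each lookup in the trace ends with echo 0 / return 0.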
[... setup/common.sh@31-32: the HugePages_Surp lookup scans MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd; each fails the match and hits continue ...]
00:03:05.545 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:05.545 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:05.545 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:05.545 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:05.545 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:05.545 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:05.545 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:05.545 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:05.545 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:05.545 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.545 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.545 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.545 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.545 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.545 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:05.545 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:05.545 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106382376 kB' 'MemAvailable: 110073496 kB' 'Buffers: 4152 kB' 'Cached: 12270772 kB' 'SwapCached: 0 kB' 'Active: 9213360 kB' 'Inactive: 3696268 kB' 'Active(anon): 8721928 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637640 kB' 'Mapped: 175148 kB' 'Shmem: 8087224 kB' 'KReclaimable: 544364 kB' 'Slab: 1414244 kB' 'SReclaimable: 544364 kB' 'SUnreclaim: 869880 kB' 'KernelStack: 27856 kB' 'PageTables: 8872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10322464 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238396 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
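Two details in the prologue above are easy to miss. local node= stays empty, so the existence test at common.sh@23 probes the nonsensical path /sys/devices/system/node/node/meminfo (the doubled node/node comes from the empty expansion), fails, and the function falls back to /proc/meminfo. And mem=("${mem[@]#Node +([0-9]) }") at @29 is an extglob strip of the "Node N " prefix that per-node meminfo files carry. A small illustration of that strip, with made-up values:

    shopt -s extglob
    lines=('Node 0 HugePages_Total:   512' 'Node 1 HugePages_Total:   512')  # hypothetical per-node lines
    lines=("${lines[@]#Node +([0-9]) }")   # drop the "Node N " prefix
    printf '%s\n' "${lines[@]}"
    # prints:
    # HugePages_Total:   512
    # HugePages_Total:   512

With the prefix gone, per-node files parse exactly like /proc/meminfo, so the same scan loop serves both paths.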
[... setup/common.sh@31-32: the HugePages_Rsvd lookup scans the fields in the same order as the dump above, MemTotal through HugePages_Free; each fails the match and hits continue ...]
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
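That is the third identical full-file scan in a row (AnonHugePages, then HugePages_Surp, now HugePages_Rsvd); each call re-reads /proc/meminfo just to return one field. Outside this harness, a single field can be fetched in one pass with awk; a hypothetical equivalent, not something the script runs:

    awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo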
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:05.546 nr_hugepages=1024
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:05.546 resv_hugepages=0
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:05.546 surplus_hugepages=0
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:05.546 anon_hugepages=0
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:05.546 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106385004 kB' 'MemAvailable: 110076124 kB' 'Buffers: 4152 kB' 'Cached: 12270828 kB' 'SwapCached: 0 kB' 'Active: 9213124 kB' 'Inactive: 3696268 kB' 'Active(anon): 8721692 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637776 kB' 'Mapped: 175140 kB' 'Shmem: 8087280 kB' 'KReclaimable: 544364 kB' 'Slab: 1414244 kB' 'SReclaimable: 544364 kB' 'SUnreclaim: 869880 kB' 'KernelStack: 27824 kB' 'PageTables: 9248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10322852 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238460 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
[... setup/common.sh@31-32: the HugePages_Total lookup begins scanning MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap; each fails the match and hits continue; the trace continues ...]
setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:05.547 13:28:53 
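At this point get_meminfo HugePages_Total has just echoed 1024, and hugepages.sh@110 re-checks the identity total == nr_hugepages + surp + resv (1024 == 1024 + 0 + 0) before calling get_nodes. The node enumeration traced next at hugepages.sh@27-33 can be sketched as below; the right-hand side of the assignment is an assumption, since the xtrace only prints the evaluated value (512 per node):

    # Sketch of get_nodes (hugepages.sh@27-33). nodes_sys and no_nodes are
    # globals in the real script; reading each node's 2048 kB pool is an
    # assumed source for the 512 the trace shows.
    declare -a nodes_sys
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node[0-9]*; do  # script uses extglob node+([0-9])
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}         # 2 on this machine
        (( no_nodes > 0 ))                # the check at hugepages.sh@33
    }

With the two nodes recorded, the loop that follows queries HugePages_Surp per node (node=0, then node=1) through the same get_meminfo path, now reading /sys/devices/system/node/nodeN/meminfo instead of /proc/meminfo.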
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.547 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59380720 kB' 'MemUsed: 6278288 kB' 'SwapCached: 0 kB' 'Active: 2345132 kB' 'Inactive: 283540 kB' 'Active(anon): 2187384 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 283540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2478228 kB' 'Mapped: 37932 kB' 'AnonPages: 153704 kB' 'Shmem: 2036940 kB' 'KernelStack: 13480 kB' 'PageTables: 3384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 298240 kB' 'Slab: 724472 kB' 'SReclaimable: 298240 kB' 'SUnreclaim: 426232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 
13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 47004556 kB' 'MemUsed: 13675284 kB' 'SwapCached: 0 kB' 'Active: 6867716 kB' 'Inactive: 3412728 kB' 'Active(anon): 6534032 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3412728 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9796776 kB' 'Mapped: 137208 kB' 'AnonPages: 483768 kB' 'Shmem: 6050364 kB' 'KernelStack: 14408 kB' 'PageTables: 5416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'KReclaimable: 246124 kB' 'Slab: 689772 kB' 'SReclaimable: 246124 kB' 'SUnreclaim: 443648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.548 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:05.549 node0=512 expecting 512 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:05.549 node1=512 expecting 512 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:05.549 00:03:05.549 real 0m4.051s 00:03:05.549 user 0m1.615s 00:03:05.549 sys 0m2.497s 00:03:05.549 13:28:53 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:05.549 13:28:53 
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:05.549 ************************************
00:03:05.549 END TEST per_node_1G_alloc
00:03:05.549 ************************************
00:03:05.549 13:28:53 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:05.549 13:28:53 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:05.549 13:28:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:05.549 13:28:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:05.549 13:28:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:05.549 ************************************
00:03:05.549 START TEST even_2G_alloc
00:03:05.549 ************************************
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
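Behind the get_test_nr_hugepages trace above: 2097152 kB (2 GiB) divided by the 2048 kB default hugepage size gives nr_hugepages=1024, and with no user-supplied node list the pages are split evenly over both NUMA nodes. A hedged sketch of that arithmetic (standalone; the 2048 kB page size is taken from the Hugepagesize field in this run's meminfo dumps):

#!/usr/bin/env bash
# Even 2G allocation: total pages, then an even per-node split.
size=2097152                 # requested size in kB (2 GiB)
default_hugepages=2048       # Hugepagesize in kB
nr_hugepages=$(( size / default_hugepages ))        # 1024
total_nodes=2
per_node=$(( nr_hugepages / total_nodes ))          # 512
declare -a nodes_test
for (( node = total_nodes - 1; node >= 0; node-- )); do
    nodes_test[node]=$per_node
done
echo "nr_hugepages=$nr_hugepages nodes_test=${nodes_test[*]}"   # 1024, 512 512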
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:05.549 13:28:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:09.756 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:09.756 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:09.756 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:09.756 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:09.756 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:09.756 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:09.756 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:09.756 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:09.756 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:09.756 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:09.756 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:09.756 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:09.756 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:09.756 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:09.756 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:09.756 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:09.756 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:09.756 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:09.756 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:09.756 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:09.756 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:09.756 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:09.756 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:09.756 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:09.756 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:09.756 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:09.756 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:09.756 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:09.756 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:09.756 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:09.756 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.756 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:09.756 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:09.756 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.756 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.756 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
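The get_meminfo scan that fills the rest of this trace reads the whole meminfo file into an array, strips the 'Node N ' prefix that per-node files carry (an extglob pattern), and then splits each line on IFS=': ' until the requested field is found. A self-contained sketch reconstructed from the trace (a sketch under those assumptions, not a verbatim copy of setup/common.sh):

#!/usr/bin/env bash
shopt -s extglob                          # for the +([0-9]) pattern below
get_meminfo() {
    local get=$1 node=${2:-} mem_f mem var val
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix every line
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && echo "${val:-0}" && return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
get_meminfo HugePages_Free                # prints 1024 on this machine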
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:09.756 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106428516 kB' 'MemAvailable: 110119636 kB' 'Buffers: 4152 kB' 'Cached: 12270968 kB' 'SwapCached: 0 kB' 'Active: 9213856 kB' 'Inactive: 3696268 kB' 'Active(anon): 8722424 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638348 kB' 'Mapped: 175160 kB' 'Shmem: 8087420 kB' 'KReclaimable: 544364 kB' 'Slab: 1414060 kB' 'SReclaimable: 544364 kB' 'SUnreclaim: 869696 kB' 'KernelStack: 27776 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10321076 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238284 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
00:03:09.756 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 [xtrace condensed: every field from MemTotal through HardwareCorrupted fails the AnonHugePages comparison and the loop continues]
00:03:09.757 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:09.757 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:09.757 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:09.757 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
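Each call to get_meminfo rescans the file from the top, which is why the identical field ladder repeats for AnonHugePages, HugePages_Surp, and HugePages_Rsvd in turn. For a one-off query outside this framework, awk performs the same lookup in one line; an alternative shown purely for illustration, not what setup/common.sh does:

# Read one /proc/meminfo field directly (illustration only).
awk -v f=HugePages_Surp '$1 == f ":" { print $2; exit }' /proc/meminfo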
00:03:09.757 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:09.757 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 [xtrace condensed: get_meminfo locals as before, with get=HugePages_Surp, node=, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip, IFS=': ']
00:03:09.758 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106427520 kB' 'MemAvailable: 110118640 kB' 'Buffers: 4152 kB' 'Cached: 12270972 kB' 'SwapCached: 0 kB' 'Active: 9213320 kB' 'Inactive: 3696268 kB' 'Active(anon): 8721888 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637852 kB' 'Mapped: 175124 kB' 'Shmem: 8087424 kB' 'KReclaimable: 544364 kB' 'Slab: 1414060 kB' 'SReclaimable: 544364 kB' 'SUnreclaim: 869696 kB' 'KernelStack: 27792 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10321092 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238252 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
00:03:09.758 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 [xtrace condensed: every field from MemTotal through HugePages_Rsvd fails the HugePages_Surp comparison and the loop continues]
00:03:09.759 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:09.759 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:09.759 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:09.759 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
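The scan now repeats once more for HugePages_Rsvd. All of the HugePages_* counters could equally be gathered in a single pass; a hedged sketch of that variant (an illustration of the idea, not the structure setup/common.sh actually uses):

#!/usr/bin/env bash
# One pass over /proc/meminfo collecting every HugePages_* counter.
declare -A hp
while IFS=': ' read -r var val _; do
    [[ $var == HugePages_* ]] && hp[$var]=$val
done < /proc/meminfo
for k in "${!hp[@]}"; do
    printf '%s=%s\n' "$k" "${hp[$k]}"
done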
00:03:09.759 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:09.759 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 [xtrace condensed: get_meminfo locals as before, with get=HugePages_Rsvd, node=, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip, IFS=': ']
00:03:09.759 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106428012 kB' 'MemAvailable: 110119132 kB' 'Buffers: 4152 kB' 'Cached: 12270992 kB' 'SwapCached: 0 kB' 'Active: 9213172 kB' 'Inactive: 3696268 kB' 'Active(anon): 8721740 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637132 kB' 'Mapped: 175124 kB' 'Shmem: 8087444 kB' 'KReclaimable: 544364 kB' 'Slab: 1414092 kB' 'SReclaimable: 544364 kB' 'SUnreclaim: 869728 kB' 'KernelStack: 27776 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10321116 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238252 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
00:03:09.759 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 [xtrace condensed: the loop again skips each meminfo field in turn while scanning for HugePages_Rsvd]
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:09.761 nr_hugepages=1024 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:09.761 resv_hugepages=0 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:09.761 surplus_hugepages=0 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:09.761 anon_hugepages=0 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.761 13:28:57 
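The lookup that just returned resv=0 is the generic key scan in setup/common.sh (get_meminfo): read the meminfo source line by line with IFS=': ', continue past every key that is not the requested one, then echo the matching value. A minimal standalone sketch of the same technique, assuming plain /proc/meminfo input (meminfo_get is a hypothetical helper name, not the SPDK function itself):

meminfo_get() {
    # meminfo_get KEY: print the value column of a /proc/meminfo entry.
    # Mirrors the traced loop: split on ': ', skip non-matching keys.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # not the requested key, keep scanning
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1  # key not present in this kernel's meminfo
}

meminfo_get HugePages_Rsvd   # prints 0 on the machine traced here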
00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106431752 kB' 'MemAvailable: 110122872 kB' 'Buffers: 4152 kB' 'Cached: 12270992 kB' 'SwapCached: 0 kB' 'Active: 9213484 kB' 'Inactive: 3696268 kB' 'Active(anon): 8722052 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637448 kB' 'Mapped: 175124 kB' 'Shmem: 8087444 kB' 'KReclaimable: 544364 kB' 'Slab: 1414092 kB' 'SReclaimable: 544364 kB' 'SUnreclaim: 869728 kB' 'KernelStack: 27776 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10321136 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238252 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:09.761 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... the identical common.sh@31/@32 IFS, read, compare, continue trace repeats for each key from MemFree through Unaccepted until the requested key is reached ...]
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
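get_nodes above just enumerated /sys/devices/system/node/node+([0-9]) and seeded the expected even split: nodes_sys[0]=nodes_sys[1]=512, i.e. the 1024 global 2048 kB pages divided over no_nodes=2. The per-node lookups that follow read /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N " prefix that the trace strips with the extglob pattern ${mem[@]#Node +([0-9]) }. A sketch of that per-node read under the standard sysfs layout (node_meminfo_get is a hypothetical name, not the SPDK helper):

shopt -s extglob   # enables the +([0-9]) pattern used below

node_meminfo_get() {
    # node_meminfo_get NODE KEY: value of KEY from one NUMA node's meminfo.
    # Lines there look like "Node 0 HugePages_Total:   512", so the
    # "Node N " prefix is stripped before matching, as the traced code does.
    local node=$1 get=$2 line var val _
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }            # drop the "Node 0 " prefix
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
}

node_meminfo_get 0 HugePages_Total   # 512 on both nodes in this run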
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59413868 kB' 'MemUsed: 6245140 kB' 'SwapCached: 0 kB' 'Active: 2345120 kB' 'Inactive: 283540 kB' 'Active(anon): 2187372 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 283540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2478360 kB' 'Mapped: 37916 kB' 'AnonPages: 153036 kB' 'Shmem: 2037072 kB' 'KernelStack: 13512 kB' 'PageTables: 3476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 298240 kB' 'Slab: 724464 kB' 'SReclaimable: 298240 kB' 'SUnreclaim: 426224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:09.763 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... the identical common.sh@31/@32 IFS, read, compare, continue trace repeats for each node0 key from MemFree through HugePages_Free until the requested key is reached ...]
00:03:09.764 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:09.764 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:09.764 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:09.764 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
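Node 0 thus reports HugePages_Surp 0, which (( nodes_test[node] += 0 )) folds into the per-node tally before the loop advances to node 1. The consistency check this builds toward is plain arithmetic over the traced numbers; a sketch assuming the same two-node, 1024-page values (the exact assertion in setup/hugepages.sh may differ):

# Values observed in the trace: 1024 global 2048 kB pages, no reserved
# or surplus pages, and an expected split of 512 pages per NUMA node.
nr_hugepages=1024 resv=0 surp=0
declare -a nodes_test=([0]=512 [1]=512)

total=0
for node in "${!nodes_test[@]}"; do
    (( total += nodes_test[node] ))   # accumulate per-node page counts
done
if (( total + surp + resv == nr_hugepages )); then
    echo "even 2G alloc OK: ${nodes_test[*]} pages over ${#nodes_test[@]} nodes"
else
    echo "mismatch: nodes account for $total of $nr_hugepages pages"
fi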
00:03:09.764 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:09.764 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:09.764 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:09.764 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:09.764 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:09.764 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:09.764 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:09.764 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.764 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:09.764 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:09.765 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.765 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.765 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:09.765 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:09.765 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 47022352 kB' 'MemUsed: 13657488 kB' 'SwapCached: 0 kB' 'Active: 6868096 kB' 'Inactive: 3412728 kB' 'Active(anon): 6534412 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3412728 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9796828 kB' 'Mapped: 137208 kB' 'AnonPages: 484096 kB' 'Shmem: 6050416 kB' 'KernelStack: 14264 kB' 'PageTables: 5416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 246124 kB' 'Slab: 689596 kB' 'SReclaimable: 246124 kB' 'SUnreclaim: 443472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:09.765 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:09.765 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... the identical common.sh@31/@32 IFS, read, compare, continue trace repeats for each node1 key from MemFree through WritebackTmp ...]
00:03:09.765 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:09.765 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.765 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.765 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.765 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.765 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.765 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.765 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.765 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.765 13:28:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.765 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.765 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.765 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.765 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.765 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.765 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.765 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.765 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.765 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.765 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.765 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.765 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.765 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.765 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:09.766 node0=512 expecting 512 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:09.766 node1=512 expecting 512 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:09.766 00:03:09.766 real 0m4.096s 00:03:09.766 user 0m1.681s 00:03:09.766 sys 0m2.486s 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:09.766 13:28:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:09.766 ************************************ 00:03:09.766 END TEST even_2G_alloc 00:03:09.766 ************************************ 00:03:09.766 13:28:58 setup.sh.hugepages -- 
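
The long field scans traced above are SPDK's get_meminfo helper walking /proc/meminfo (or a per-node /sys copy) line by line until the requested key matches. A minimal sketch of that technique, assuming the same sysfs layout; this is illustrative, not the verbatim setup/common.sh source:

#!/usr/bin/env bash
# Sketch of the meminfo scan traced above (illustrative, not the literal
# setup/common.sh implementation).
shopt -s extglob   # needed for the "Node <N> " prefix strip below

get_meminfo() {
	local get=$1 node=$2 var val _ line
	local mem_f=/proc/meminfo mem

	# Prefer the per-node view when a node was given and the file exists
	# (the trace switched to /sys/devices/system/node/node1/meminfo above).
	[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo

	mapfile -t mem <"$mem_f"
	# Per-node meminfo prefixes every line with "Node <N> "; strip it so
	# both file layouts parse identically.
	mem=("${mem[@]#Node +([0-9]) }")

	# Walk the fields with IFS=': ' until the requested key matches,
	# exactly like the long [[ ... == HugePages_Surp ]] run in the trace.
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<<"$line"
		[[ $var == "$get" ]] && { echo "$val"; return 0; }
	done
	return 1
}

get_meminfo HugePages_Surp 1   # prints 0 on this box, matching the trace
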
00:03:09.766 13:28:58 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:09.766 13:28:58 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:09.766 13:28:58 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:09.766 13:28:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:09.766 13:28:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:09.766 ************************************
00:03:09.766 START TEST odd_alloc
00:03:09.766 ************************************
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:09.766 13:28:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
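
The per-node split traced above is the interesting part of odd_alloc: 1025 pages cannot be divided evenly across two nodes, so the loop hands one node the floor share (512) and lets the last assignment absorb the remainder (513). A minimal sketch of that distribution before setup.sh's output continues below, assuming the two NUMA nodes seen in this run; the function name is illustrative, not the actual hugepages.sh helper:

# Illustrative sketch of splitting an odd hugepage count across NUMA nodes,
# mirroring the nodes_test[] assignments in the trace (512 + 513 = 1025).
split_hugepages_per_node() {
	local total=$1 nodes=$2
	local -a per_node
	local n remaining=$total

	for ((n = nodes - 1; n >= 0; n--)); do
		if ((n > 0)); then
			# Every node but the last gets the floor share.
			per_node[n]=$((total / nodes))
		else
			# The final assignment absorbs the remainder (513 here).
			per_node[n]=$remaining
		fi
		((remaining -= per_node[n]))
	done

	for n in "${!per_node[@]}"; do
		echo "node$n=${per_node[n]}"
	done
}

split_hugepages_per_node 1025 2   # -> node0=513, node1=512
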
00:03:13.977 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:13.977 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:13.977 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:13.977 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:13.977 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:13.977 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:13.977 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:13.977 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:13.977 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:13.977 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:13.977 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:13.977 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:13.977 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:13.977 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:13.977 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:13.977 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:13.977 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:13.977 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:13.977 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:13.977 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:13.977 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:13.977 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:13.977 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:13.977 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:13.977 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:13.977 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:13.978 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:13.978 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:13.978 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:13.978 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:13.978 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.978 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.978 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.978 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.978 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.978 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:13.978 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:13.978 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106511456 kB' 'MemAvailable: 110202576 kB' 'Buffers: 4152 kB' 'Cached: 12271144 kB' 'SwapCached: 0 kB' 'Active: 9212640 kB' 'Inactive: 3696268 kB' 'Active(anon): 8721208 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636436 kB' 'Mapped: 175500 kB' 'Shmem: 8087596 kB' 'KReclaimable: 544364 kB' 'Slab: 1414256 kB' 'SReclaimable: 544364 kB' 'SUnreclaim: 869892 kB' 'KernelStack: 27760 kB' 'PageTables: 8608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 10323340 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238204 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
00:03:13.978 [scan elided: each field above is read with IFS=': ' read -r var val _ and skipped with continue until AnonHugePages matches]
00:03:13.979 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:13.979 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:13.979 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:13.979 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
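
verify_nr_hugepages only samples AnonHugePages after confirming transparent hugepages are not globally disabled; the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] line above is that check, a glob match against the THP sysfs knob (whose value on this box is "always [madvise] never"). A hedged sketch of the gate, reusing the get_meminfo sketch from earlier; the surrounding wiring is illustrative, not the literal hugepages.sh code:

# Sketch: sample AnonHugePages only when THP is not set to "never".
# The sysfs path is the standard kernel knob; get_meminfo is the sketch above.
thp_state=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"

anon=0
if [[ $thp_state != *"[never]"* ]]; then
	# THP is at least partially enabled, so AnonHugePages is meaningful.
	anon=$(get_meminfo AnonHugePages)
fi
echo "AnonHugePages: ${anon} kB"   # 0 kB in this run
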
00:03:13.979 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:13.979 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:13.979 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:13.979 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:13.979 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:13.979 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.979 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.979 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.979 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.979 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.979 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:13.979 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:13.979 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106514444 kB' 'MemAvailable: 110205564 kB' 'Buffers: 4152 kB' 'Cached: 12271144 kB' 'SwapCached: 0 kB' 'Active: 9214228 kB' 'Inactive: 3696268 kB' 'Active(anon): 8722796 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638028 kB' 'Mapped: 175228 kB' 'Shmem: 8087596 kB' 'KReclaimable: 544364 kB' 'Slab: 1414304 kB' 'SReclaimable: 544364 kB' 'SUnreclaim: 869940 kB' 'KernelStack: 27824 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 10324848 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238236 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
00:03:13.980 [scan elided: each field above is read with IFS=': ' read -r var val _ and skipped with continue until HugePages_Surp matches]
00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB' 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
00:03:13.981 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # (scans /proc/meminfo keys MemTotal through HugePages_Free -- 'continue' on each non-match)
00:03:13.982 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:13.983 nr_hugepages=1025
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:13.983 resv_hugepages=0
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:13.983 surplus_hugepages=0
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:13.983 anon_hugepages=0
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
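(Editor's note: every get_meminfo call traced above follows the same pattern -- pick /proc/meminfo, or a node's own meminfo file when a node argument is given, strip any "Node <n> " prefix, then read "key: value" pairs and 'continue' until the requested key matches. The following is a minimal bash sketch reconstructed from the traced commands, not copied from SPDK's test/setup/common.sh; names and details may differ from the real helper.)

#!/usr/bin/env bash
shopt -s extglob  # needed for the +([0-9]) pattern below

get_meminfo() {
	local get=$1 node=$2  # key to fetch; optional NUMA node number
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# per-node queries read the node's own meminfo file instead;
	# with an empty $node this path does not exist and the default stays
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem <"$mem_f"
	# node meminfo files prefix each line with "Node <n> "; strip it
	mem=("${mem[@]#Node +([0-9]) }")

	# scan "key: value unit" records until the requested key matches
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

(Called as get_meminfo HugePages_Total for the system-wide value or get_meminfo HugePages_Surp 0 for node 0 -- the same shapes seen in the trace; the per-key read/continue iterations are what fill the log around this point.)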
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:13.983 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106522744 kB' 'MemAvailable: 110213800 kB' 'Buffers: 4152 kB' 'Cached: 12271204 kB' 'SwapCached: 0 kB' 'Active: 9211984 kB' 'Inactive: 3696268 kB' 'Active(anon): 8720552 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636176 kB' 'Mapped: 175236 kB' 'Shmem: 8087656 kB' 'KReclaimable: 544300 kB' 'Slab: 1414272 kB' 'SReclaimable: 544300 kB' 'SUnreclaim: 869972 kB' 'KernelStack: 27808 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 10323664 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238156 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # (scans /proc/meminfo keys MemTotal through Unaccepted -- 'continue' on each non-match)
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:13.984 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:13.985 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59464928 kB' 'MemUsed: 6194080 kB' 'SwapCached: 0 kB' 'Active: 2344716 kB' 'Inactive: 283540 kB' 'Active(anon): 2186968 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 283540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2478424 kB' 'Mapped: 37936 kB' 'AnonPages: 152996 kB' 'Shmem: 2037136 kB' 'KernelStack: 13496 kB' 'PageTables: 3420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 298176 kB' 'Slab: 724316 kB' 'SReclaimable: 298176 kB' 'SUnreclaim: 426140 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:13.985 13:29:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # (scans node0 meminfo keys MemTotal through HugePages_Free -- 'continue' on each non-match)
00:03:13.986 13:29:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:13.986 13:29:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:13.986 13:29:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:13.986 13:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:13.986 13:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:13.986 13:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:13.986 13:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:13.986 13:29:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:13.986 13:29:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:13.986 13:29:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:13.986 13:29:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:13.986 13:29:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.986 13:29:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:13.986 13:29:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:13.986 13:29:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.986 13:29:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.986 13:29:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:13.986 13:29:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:13.986 13:29:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 47057388 kB' 'MemUsed: 13622452 kB' 'SwapCached: 0 kB' 'Active: 6867304 kB' 'Inactive: 3412728 kB' 'Active(anon): 6533620 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3412728 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9796972 kB' 'Mapped: 137248 kB' 'AnonPages: 483124 kB' 'Shmem: 6050560 kB' 'KernelStack: 14200 kB' 'PageTables: 4824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 246124 kB' 'Slab: 689956 kB' 'SReclaimable: 246124 kB' 'SUnreclaim: 443832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:03:13.986 13:29:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[
[repeated xtrace records condensed: node1 meminfo is scanned the same way — every field from MemTotal through HugePages_Free is compared against HugePages_Surp and skipped via continue]
00:03:13.988 13:29:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:13.988 13:29:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:13.988 13:29:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:13.988 13:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:13.988 13:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:13.988 13:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:13.988 13:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:13.988 13:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:13.988 node0=512 expecting 513
00:03:13.988 13:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:13.988 13:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:13.988 13:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:13.988 13:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:13.988 node1=513 expecting 512
00:03:13.988 13:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:13.988
00:03:13.988 real 0m3.943s
00:03:13.988 user 0m1.584s
00:03:13.988 sys 0m2.426s
00:03:13.988 13:29:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:13.988 13:29:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:13.988 ************************************
00:03:13.988 END TEST odd_alloc
00:03:13.988 ************************************
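[editor's note: the get_meminfo helper that dominates the xtrace above reads /proc/meminfo, or a per-node /sys/devices/system/node/nodeN/meminfo, strips the "Node <n> " prefix, and scans field by field until the requested key matches. A minimal standalone sketch of that lookup pattern follows — reconstructed from the trace, not quoted from the SPDK source; the function name and here-string loop are illustrative:]

shopt -s extglob   # needed for the +([0-9]) prefix-stripping pattern below

get_meminfo_sketch() {   # usage: get_meminfo_sketch <Field> [<numa node>]
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo
    # Per-node files prefix every line with "Node <n> ", e.g. "Node 1 MemTotal: ...".
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the per-node prefix when present
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # "HugePages_Surp: 0" -> var=HugePages_Surp val=0
        [[ $var == "$get" ]] || continue         # the long runs of 'continue' in the log
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Surp 1   # on the node1 dump above this would print 0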
00:03:13.988 13:29:02 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:13.988 13:29:02 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:13.988 13:29:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:13.988 13:29:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:13.988 13:29:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:13.988 ************************************
00:03:13.988 START TEST custom_alloc
00:03:13.988 ************************************
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
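[editor's note: the arithmetic here is visible in the trace — get_test_nr_hugepages asked for a 1048576 kB pool, and with the default 2048 kB hugepage size that is 1048576 / 2048 = 512 pages, which get_test_nr_hugepages_per_node then splits evenly across the two NUMA nodes (256 each); the 2097152 kB request that follows resolves to 1024 pages the same way. A standalone sketch of that computation, with names mirroring the trace but purely illustrative:]

default_hugepages=2048   # kB, i.e. Hugepagesize from /proc/meminfo
size=1048576             # kB requested for the pool

(( size >= default_hugepages )) && nr_hugepages=$(( size / default_hugepages ))   # 1048576/2048 = 512

no_nodes=2               # NUMA nodes on this box
declare -a nodes_test
# Even per-node split, filling from the highest node down as the trace does.
for (( node = no_nodes - 1; node >= 0; node-- )); do
    nodes_test[node]=$(( nr_hugepages / no_nodes ))   # 256 per node
done
echo "nodes_test: ${nodes_test[*]}"   # nodes_test: 256 256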
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:13.989 13:29:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
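[editor's note: once both pools are sized, the per-node spec handed to scripts/setup.sh is just the nodes_hp array joined on the function's local IFS=','. A standalone sketch of that assembly — names mirror the trace, illustrative only:]

build_hugenode_sketch() {
    local IFS=,                        # makes "${HUGENODE[*]}" expand comma-separated
    local -a nodes_hp=([0]=512 [1]=1024)
    local -a HUGENODE=()
    local node _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))   # running total across nodes
    done
    echo "HUGENODE=${HUGENODE[*]}"      # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
    echo "nr_hugepages=$_nr_hugepages"  # 1536, the total verified further down
}
build_hugenode_sketch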
00:03:17.286 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:17.286 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:17.286 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:17.286 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:17.286 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:17.286 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:17.286 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:17.286 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:17.286 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:17.286 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:17.286 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:17.286 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:17.286 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:17.286 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:17.286 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:17.286 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:17.552 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.552 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105534840 kB' 'MemAvailable: 109225896 kB' 'Buffers: 4152 kB' 'Cached: 12271332 kB' 'SwapCached: 0 kB' 'Active: 9214164 kB' 'Inactive: 3696268 kB' 'Active(anon): 8722732 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637700 kB' 'Mapped: 175220 kB' 'Shmem: 8087784 kB' 'KReclaimable: 544300 kB' 'Slab: 1414588 kB' 'SReclaimable: 544300 kB' 'SUnreclaim: 870288 kB' 'KernelStack: 27984 kB' 'PageTables: 9620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 10326156 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238508 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
[repeated xtrace records condensed: every meminfo field from MemTotal through HardwareCorrupted is compared against AnonHugePages and skipped via continue]
00:03:17.553 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:17.553 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.553 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:17.553 13:29:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
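[editor's note: before checking the hugepage counters, verify_nr_hugepages works out how much memory already sits in transparent hugepages. The "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test above is the THP mode string from sysfs — the bracketed token is the active mode — and since THP is not pinned to never, AnonHugePages is read as the anon baseline (0 kB in this run, hence anon=0). A sketch of that check under the standard Linux sysfs layout; the surrounding logic is inferred from the trace, not quoted from it:]

anon=0
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    # Same lookup the trace performs via get_meminfo AnonHugePages (value in kB).
    anon=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
fi
echo "anon=$anon"   # anon=0 in this run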
00:03:17.553 13:29:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:17.553 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:17.553 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:17.553 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:17.553 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.553 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.553 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.553 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.553 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.554 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.554 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.554 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.554 13:29:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105535328 kB' 'MemAvailable: 109226384 kB' 'Buffers: 4152 kB' 'Cached: 12271336 kB' 'SwapCached: 0 kB' 'Active: 9214100 kB' 'Inactive: 3696268 kB' 'Active(anon): 8722668 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638112 kB' 'Mapped: 175300 kB' 'Shmem: 8087788 kB' 'KReclaimable: 544300 kB' 'Slab: 1414648 kB' 'SReclaimable: 544300 kB' 'SUnreclaim: 870348 kB' 'KernelStack: 27984 kB' 'PageTables: 9480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 10326172 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238492 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
[repeated xtrace records condensed: the same per-field comparison against HugePages_Surp runs again — MemTotal through Writeback all hit continue]
00:03:17.554 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.554 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.554 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.554 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.554 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.554 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.554 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.554 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.554 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.554 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.554 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.554 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.555 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.556 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.556 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.556 13:29:06 setup.sh.hugepages.custom_alloc -- 
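Everything in this block is bash xtrace (set -x) output from get_meminfo() in setup/common.sh: the helper snapshots a meminfo file into an array, then walks it entry by entry until the requested field matches, echoing that field's value. For orientation, here is a minimal sketch of that helper reconstructed from the file@line markers in the trace; it is an illustration of the pattern, not the verbatim SPDK source:

    # minimal get_meminfo() sketch, reconstructed from the xtrace above
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            # a specific NUMA node was requested - read that node's own meminfo
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node meminfo lines carry a "Node N " prefix - strip it (extglob pattern)
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # one compare/continue pair per rejected field
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Because every rejected field produces its own IFS / read / compare / continue quartet under xtrace, a single get_meminfo call accounts for dozens of near-identical lines in this log.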
00:03:17.556 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.556 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105536220 kB' 'MemAvailable: 109227276 kB' 'Buffers: 4152 kB' 'Cached: 12271356 kB' 'SwapCached: 0 kB' 'Active: 9213772 kB' 'Inactive: 3696268 kB' 'Active(anon): 8722340 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637500 kB' 'Mapped: 175212 kB' 'Shmem: 8087808 kB' 'KReclaimable: 544300 kB' 'Slab: 1414708 kB' 'SReclaimable: 544300 kB' 'SUnreclaim: 870408 kB' 'KernelStack: 27968 kB' 'PageTables: 9096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 10326192 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238460 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
[.. xtrace scan elided: the IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue quartet repeats for every field of the snapshot above until HugePages_Rsvd is reached (timestamps 00:03:17.556-00:03:17.558) ..]
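A note on the escaped patterns such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d: the right-hand side of the [[ ... == ... ]] comparison is quoted in the script, and bash's xtrace renders a quoted match operand with every character backslash-escaped to show that it is matched literally rather than as a glob. The effect is easy to reproduce (illustrative one-liner, not taken from this log):

    $ bash -xc 'var=Buffers; get=HugePages_Rsvd; [[ $var == "$get" ]]'
    + var=Buffers
    + get=HugePages_Rsvd
    + [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]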
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:17.558 nr_hugepages=1536
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:17.558 resv_hugepages=0
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:17.558 surplus_hugepages=0
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:17.558 anon_hugepages=0
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105537712 kB' 'MemAvailable: 109228768 kB' 'Buffers: 4152 kB' 'Cached: 12271376 kB' 'SwapCached: 0 kB' 'Active: 9213088 kB' 'Inactive: 3696268 kB' 'Active(anon): 8721656 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637056 kB' 'Mapped: 175212 kB' 'Shmem: 8087828 kB' 'KReclaimable: 544300 kB' 'Slab: 1414708 kB' 'SReclaimable: 544300 kB' 'SUnreclaim: 870408 kB' 'KernelStack: 27808 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 10323384 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238364 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
00:03:17.558 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[.. xtrace scan elided: the compare/continue quartet repeats once more, field by field against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, until HugePages_Total is reached (timestamps 00:03:17.558-00:03:17.560) ..]
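The snapshot just printed is internally consistent on the hugepage side: HugePages_Total is 1536 pages with a Hugepagesize of 2048 kB, and 1536 * 2048 kB = 3145728 kB, exactly the Hugetlb figure reported; all 1536 pages are still free, and none are reserved or surplus. A quick sanity check of that product:

    $ echo $((1536 * 2048))    # HugePages_Total * Hugepagesize, in kB
    3145728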
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for 
node in /sys/devices/system/node/node+([0-9]) 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59500468 kB' 'MemUsed: 6158540 kB' 'SwapCached: 0 kB' 'Active: 2346692 kB' 'Inactive: 283540 kB' 'Active(anon): 2188944 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 283540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2478472 kB' 'Mapped: 37952 kB' 'AnonPages: 154964 kB' 'Shmem: 2037184 kB' 'KernelStack: 13496 kB' 'PageTables: 3436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 298176 kB' 'Slab: 724168 kB' 'SReclaimable: 298176 kB' 'SUnreclaim: 425992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.560 
13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.560 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 
13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
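
The same scan is now running against node 0's meminfo. Per-node files under /sys/devices/system/node prefix every line with "Node <id> ", which the script strips with the extglob substitution visible at common.sh@29 before reading fields. A minimal sketch of that variant, assuming a bash with extglob available (variable names are mine):

shopt -s extglob    # required for the +([0-9]) pattern below
node=0
# Per-node lines read "Node 0 HugePages_Surp: 0"; drop the "Node 0 " prefix
# the same way common.sh@29 does, then scan fields as before.
mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
mem=("${mem[@]#Node +([0-9]) }")
for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == HugePages_Surp ]] && echo "$val"   # 0 for both nodes in this run
done
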
00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
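
Stripped of the trace, the bookkeeping these per-node scans feed is simple: hugepages.sh@110 first checks that the global pool (1536 pages in this run) equals the requested pages plus surplus and reserved, then each node's expected share is bumped by its own reserved and surplus counts before the "nodeN=... expecting ..." lines are echoed. In isolation, with this run's numbers:

nr_hugepages=1536 surp=0 resv=0
(( 1536 == nr_hugepages + surp + resv )) && echo "global pool is consistent"
nodes_test=([0]=512 [1]=1024)        # per-node targets for custom_alloc
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))   # plus the node's HugePages_Surp, 0 here
    echo "node${node}=${nodes_test[node]} expecting ${nodes_test[node]}"
done
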
00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.561 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 46037312 kB' 'MemUsed: 14642528 kB' 'SwapCached: 0 kB' 'Active: 6866480 kB' 'Inactive: 3412728 kB' 'Active(anon): 6532796 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3412728 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9797096 kB' 'Mapped: 137224 kB' 'AnonPages: 482212 kB' 'Shmem: 6050684 kB' 'KernelStack: 14232 kB' 'PageTables: 5328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 246124 kB' 'Slab: 690572 kB' 'SReclaimable: 246124 kB' 'SUnreclaim: 444448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.562 
13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.562 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.823 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.823 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.823 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.823 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.823 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.823 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.823 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.823 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.823 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.823 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.823 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.823 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.823 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.823 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.824 13:29:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.824 13:29:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:17.824 node0=512 expecting 512
00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:17.824 node1=1024 expecting 1024
00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:17.824
00:03:17.824 real 0m4.035s
00:03:17.824 user 0m1.590s
00:03:17.824 sys 0m2.512s
00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:17.824 13:29:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:17.824 ************************************
00:03:17.824 END TEST custom_alloc
00:03:17.824 ************************************
00:03:17.824 13:29:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:17.824 13:29:06 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:17.824 13:29:06 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:17.824 13:29:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:17.824 13:29:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:17.824 ************************************
00:03:17.824 START TEST no_shrink_alloc
00:03:17.824 ************************************
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
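
The page count no_shrink_alloc starts from falls out of plain arithmetic: assuming the size argument is in KiB (which is what makes the traced numbers agree), 2097152 KiB at the default 2 MiB hugepage size is 1024 pages, and node_ids=('0') pins all of them to node 0. A sketch of that computation:

size=2097152                                                     # KiB, from the trace
hugepagesize=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this box
echo "$(( size / hugepagesize )) pages"                          # 1024 pages, all on node 0 here
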
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:17.824 13:29:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:21.125 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:21.125 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:21.125 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:21.125 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:21.125 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:21.125 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:21.125 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:21.125 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:21.125 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:21.125 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:21.125 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:21.125 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:21.125 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:21.125 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:21.125 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:21.125 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:21.125 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc --
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106565140 kB' 'MemAvailable: 110256196 kB' 'Buffers: 4152 kB' 'Cached: 12271508 kB' 'SwapCached: 0 kB' 'Active: 9214888 kB' 'Inactive: 3696268 kB' 'Active(anon): 8723456 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638288 kB' 'Mapped: 175224 kB' 'Shmem: 8087960 kB' 'KReclaimable: 544300 kB' 'Slab: 1414020 kB' 'SReclaimable: 544300 kB' 'SUnreclaim: 869720 kB' 'KernelStack: 27776 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10324436 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238364 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB' 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.125 
13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
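
The AnonHugePages scan above only matters because of the gate traced at hugepages.sh@96: the kernel brackets the active mode in /sys/kernel/mm/transparent_hugepage/enabled (here "always [madvise] never", i.e. madvise), so the script only needs to account for transparent hugepages when that mode is not [never]. A sketch of the same check, using the standard sysfs path:

thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    # THP may back anonymous memory, so AnonHugePages is worth reading.
    grep '^AnonHugePages' /proc/meminfo
fi
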
00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:21.126 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106564996 kB' 'MemAvailable: 110256052 kB' 'Buffers: 4152 kB' 'Cached: 12271512 kB' 'SwapCached: 0 kB' 'Active: 9215156 kB' 'Inactive: 3696268 kB' 'Active(anon): 8723724 
kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638592 kB' 'Mapped: 175300 kB' 'Shmem: 8087964 kB' 'KReclaimable: 544300 kB' 'Slab: 1414020 kB' 'SReclaimable: 544300 kB' 'SUnreclaim: 869720 kB' 'KernelStack: 27744 kB' 'PageTables: 8836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10324456 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238316 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB' 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.127 13:29:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the identical scan repeated for get=HugePages_Surp; setup/common.sh@32 compared Active through Unaccepted, in the order shown by the meminfo dump above, against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and took the continue branch every time; clock 00:03:21.127 through 00:03:21.392, wall time 13:29:09]
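The trace above is the inner loop of setup/common.sh's get_meminfo helper: it snapshots /proc/meminfo (or a per-node meminfo file when a node is given), splits each line on ': ', and compares the key against the requested field, echoing the value on a match. The escaped right-hand side such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p is simply how bash xtrace prints a quoted, literal pattern inside [[ == ]]. The following is a minimal sketch reconstructed from this trace, not the actual setup/common.sh; the argument handling and the way mem[] is populated are assumptions.

shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

get_meminfo() {                      # e.g. get_meminfo HugePages_Surp [node]
    local get=$1
    local node=${2:-}                # empty in this run, so the system-wide file is used
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # with node empty this tests .../node/node/meminfo, which does not exist,
    # exactly as the trace shows, so mem_f stays /proc/meminfo
    if [[ -n $node ]] && [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # quoted RHS is what xtrace renders escaped
        echo "$val"                        # e.g. 0 for HugePages_Surp above
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

On the snapshot above, get_meminfo HugePages_Surp walks every key until HugePages_Surp matches and prints 0, which hugepages.sh then stores as surp=0.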
00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106564348 kB' 'MemAvailable: 110255404 kB' 'Buffers: 4152 kB' 'Cached: 12271548 kB' 'SwapCached: 0 kB' 'Active: 9213820 kB' 'Inactive: 3696268 kB' 'Active(anon): 8722388 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637596 kB' 'Mapped: 175200 kB' 'Shmem: 8088000 kB' 
'KReclaimable: 544300 kB' 'Slab: 1414020 kB' 'SReclaimable: 544300 kB' 'SUnreclaim: 869720 kB' 'KernelStack: 27712 kB' 'PageTables: 8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10324476 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238316 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB' 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.392 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.393 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.393 13:29:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
[xtrace condensed: the scan repeated for get=HugePages_Rsvd; setup/common.sh@32 compared Inactive through HugePages_Total, in the same /proc/meminfo order, against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and took the continue branch every time; clock 00:03:21.393 through 00:03:21.394, wall time 13:29:09] 00:03:21.394 13:29:09
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:21.394 nr_hugepages=1024 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:21.394 resv_hugepages=0 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:21.394 surplus_hugepages=0 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:21.394 anon_hugepages=0 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.394 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106564668 kB' 'MemAvailable: 110255724 kB' 'Buffers: 4152 kB' 'Cached: 12271548 kB' 'SwapCached: 0 kB' 'Active: 9213976 kB' 'Inactive: 3696268 kB' 'Active(anon): 8722544 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637752 kB' 'Mapped: 175200 kB' 'Shmem: 8088000 kB' 'KReclaimable: 544300 kB' 'Slab: 1414020 kB' 'SReclaimable: 544300 kB' 'SUnreclaim: 869720 kB' 'KernelStack: 27696 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10324500 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238332 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
[trace condensed: setup/common.sh@31-@32 repeat IFS=': ' / read -r var val _ / continue for every remaining /proc/meminfo field until HugePages_Total is reached]
00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.396 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.397 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58456524 kB' 'MemUsed: 7202484 kB' 'SwapCached: 0 kB' 'Active: 2347480 kB' 'Inactive: 283540 kB' 'Active(anon): 2189732 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 283540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2478524 kB' 'Mapped: 37968 kB' 'AnonPages: 155652 kB' 'Shmem: 2037236 kB' 'KernelStack: 13480 kB' 'PageTables: 3380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 298176 kB' 'Slab: 723692 kB' 'SReclaimable: 298176 kB' 'SUnreclaim: 425516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
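For anyone decoding the trace: every get_meminfo call above expands to the same pattern (common.sh @17-@33 in the trace), so here is a minimal bash sketch of that helper, reconstructed from the trace alone rather than taken from the shipped SPDK setup/common.sh, which may differ in detail:

    #!/usr/bin/env bash
    # Sketch of get_meminfo as reconstructed from the trace above.
    shopt -s extglob # the +([0-9]) patterns below need extended globbing

    get_meminfo() {
        local get=$1 node=${2:-} # key to look up, optional NUMA node
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node queries read the node's sysfs meminfo instead (@23-@24).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # sysfs lines are prefixed with "Node N "; strip that prefix (@29).
        mem=("${mem[@]#Node +([0-9]) }")
        # Walk the fields until the requested key matches (@31-@33); this
        # walk is what produces the long read/continue runs in the log.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total  # global lookup: prints 1024 in this run
    get_meminfo HugePages_Surp 0 # node0 lookup:  prints 0 in this run

get_nodes (hugepages.sh @27-@33 in the trace) discovers the NUMA topology the same way, globbing /sys/devices/system/node/node+([0-9]) and keying nodes_sys[] by the trailing node index; on this box that yields nodes_sys[0]=1024, nodes_sys[1]=0 and no_nodes=2, exactly as traced above.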
[trace condensed: the same setup/common.sh@31-@32 read/continue loop walks the node0 meminfo fields until HugePages_Surp is reached]
00:03:21.398 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.398 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.398 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:21.398 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.398 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.398 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.398 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.398 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' node0=1024 expecting 1024 00:03:21.398 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:21.398 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:21.398 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:21.398 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:21.398 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.398 13:29:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:25.609 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:25.609 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:25.609 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:25.609 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:25.609 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:25.609 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:25.609 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:25.609 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:25.609 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:25.609 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:25.609 0000:00:01.7 (8086 0b00): Already using the
vfio-pci driver 00:03:25.609 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:25.609 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:25.609 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:25.609 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:25.609 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:25.609 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:25.609 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:25.609 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:25.609 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:25.609 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:25.609 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:25.609 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:25.609 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:25.609 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:25.609 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:25.609 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:25.609 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:25.609 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:25.609 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:25.609 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.609 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.609 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.609 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.609 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.609 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.609 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.609 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.609 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106591608 kB' 'MemAvailable: 110282632 kB' 'Buffers: 4152 kB' 'Cached: 12271664 kB' 'SwapCached: 0 kB' 'Active: 9215860 kB' 'Inactive: 3696268 kB' 'Active(anon): 8724428 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639028 kB' 'Mapped: 175312 kB' 'Shmem: 8088116 kB' 'KReclaimable: 544268 kB' 'Slab: 1413784 kB' 'SReclaimable: 544268 kB' 'SUnreclaim: 869516 kB' 'KernelStack: 27728 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10325248 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238364 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
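The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above is verify_nr_hugepages deciding whether anonymous huge pages can exist at all before it bothers querying AnonHugePages. A hedged sketch of that step, assuming (not shown in the log itself) that the left-hand string is the contents of /sys/kernel/mm/transparent_hugepage/enabled, where the bracketed word marks the active THP policy:

    # Sketch: only query AnonHugePages when THP is not pinned to "never".
    # On this machine the policy string is "always [madvise] never", so the
    # *[never]* pattern does not match and the query proceeds (yielding 0).
    anon=0
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages) # uses the helper sketched earlier
    fi

The preceding INFO line is scripts/setup.sh declining to shrink the pool: NRHUGE asked for 512 pages, node0 already holds 1024, and with CLEAR_HUGE=no the larger existing reservation is kept, which is the behavior this no_shrink_alloc test exists to verify.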
[trace condensed: the setup/common.sh@31-@32 read/continue loop walks /proc/meminfo again until AnonHugePages is reached]
00:03:25.611 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.611 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.611 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:25.611 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:25.611 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
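Taken together, the hugepages.sh fragments in the trace (@89-@130 here, @107-@117 earlier) amount to the following accounting, shown as a reconstructed sketch rather than the shipped script; it assumes the get_meminfo helper sketched earlier and this machine's two-node layout:

    # Sketch of the verify_nr_hugepages bookkeeping seen in the trace.
    declare -a nodes_test=([0]=1024) # what the test allocated, per node
    declare -a nodes_sys             # what the kernel reports, per node
    nr_hugepages=1024

    for d in /sys/devices/system/node/node[0-9]*; do
        # Mirror get_nodes: key by node index, read the per-node total.
        nodes_sys[${d##*node}]=$(get_meminfo HugePages_Total "${d##*node}")
    done

    surp=$(get_meminfo HugePages_Surp) # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd) # 0 in this run
    # Global check: kernel total == requested + surplus + reserved (@110).
    (($(get_meminfo HugePages_Total) == nr_hugepages + surp + resv)) || exit 1

    for node in "${!nodes_test[@]}"; do
        # Fold reserved and per-node surplus pages into the expectation
        # (@116-@117), then compare with the kernel's figure (@128-@130).
        ((nodes_test[node] += resv + $(get_meminfo HugePages_Surp "$node")))
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
    done

With 1024 pages on node0 and zero surplus and reserved pages, every comparison passes, which is why the log prints "node0=1024 expecting 1024" above and the second verify pass below repeats the same meminfo walks.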
00:03:25.611 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.611 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:25.611 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:25.611 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.611 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.611 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.611 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.611 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.611 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.611 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.611 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.611 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106592008 kB' 'MemAvailable: 110283032 kB' 'Buffers: 4152 kB' 'Cached: 12271668 kB' 'SwapCached: 0 kB' 'Active: 9215488 kB' 'Inactive: 3696268 kB' 'Active(anon): 8724056 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638688 kB' 'Mapped: 175304 kB' 'Shmem: 8088120 kB' 'KReclaimable: 544268 kB' 'Slab: 1413784 kB' 'SReclaimable: 544268 kB' 'SUnreclaim: 869516 kB' 'KernelStack: 27728 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10341140 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238364 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
00:03:25.611-00:03:25.613 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: per-key read/continue over every snapshot field from MemTotal through HugePages_Rsvd; none matches \H\u\g\e\P\a\g\e\s\_\S\u\r\p]
00:03:25.613 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.613 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.613 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:25.613 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:25.613 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
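The function-entry trace (common.sh@17-25) also shows why the global /proc/meminfo is in use: node= is empty, so the existence test probes the literal path /sys/devices/system/node/node/meminfo and fails. A hedged sketch of that source selection (the exact branch structure is assumed, not taken verbatim from common.sh):

    meminfo_source() {                            # illustrative helper, not part of common.sh
        local node=${1:-} mem_f=/proc/meminfo     # common.sh@18/@22
        # common.sh@23/@25: with node empty the probe hits ".../node/node/meminfo",
        # which cannot exist, so the global file stays selected
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }

meminfo_source with no argument prints /proc/meminfo, as in this run; meminfo_source 0 would select node0's per-NUMA file on a multi-node machine.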
00:03:25.613 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:25.613 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:25.613 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:25.613 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.613 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.613 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.613 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.613 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.613 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.613 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.613 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.613 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106597252 kB' 'MemAvailable: 110288276 kB' 'Buffers: 4152 kB' 'Cached: 12271668 kB' 'SwapCached: 0 kB' 'Active: 9214688 kB' 'Inactive: 3696268 kB' 'Active(anon): 8723256 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637908 kB' 'Mapped: 175228 kB' 'Shmem: 8088120 kB' 'KReclaimable: 544268 kB' 'Slab: 1413816 kB' 'SReclaimable: 544268 kB' 'SUnreclaim: 869548 kB' 'KernelStack: 27728 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10324920 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238348 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
00:03:25.613-00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: per-key read/continue over every snapshot field from MemTotal through HugePages_Free; none matches \H\u\g\e\P\a\g\e\s\_\R\s\v\d]
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:25.615 nr_hugepages=1024
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:25.615 resv_hugepages=0
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:25.615 surplus_hugepages=0
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:25.615 anon_hugepages=0
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
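The arithmetic at hugepages.sh@107-109 just above is the invariant this no_shrink_alloc pass checks before continuing: every configured hugepage must be accounted for, with no surplus or reserved pages outstanding. A standalone restatement with the values the trace derived (illustrative only):

    nr_hugepages=1024 surp=0 resv=0 anon=0        # values echoed at hugepages.sh@102-105
    # hugepages.sh@107: the configured total must equal allocated + surplus + reserved
    (( 1024 == nr_hugepages + surp + resv )) || echo 'FAIL: surplus/reserved pages skew the total'
    # hugepages.sh@109: the kernel must still report exactly the requested page count
    (( 1024 == nr_hugepages )) || echo 'FAIL: nr_hugepages drifted from the requested 1024'

Both tests pass silently here, so the script goes on to re-read HugePages_Total for its post-check bookkeeping.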
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.615 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106596936 kB' 'MemAvailable: 110287960 kB' 'Buffers: 4152 kB' 'Cached: 12271708 kB' 'SwapCached: 0 kB' 'Active: 9214452 kB' 'Inactive: 3696268 kB' 'Active(anon): 8723020 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3696268 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638088 kB' 'Mapped: 175228 kB' 'Shmem: 8088160 kB' 'KReclaimable: 544268 kB' 'Slab: 1413816 kB' 'SReclaimable: 544268 kB' 'SUnreclaim: 869548 kB' 'KernelStack: 27680 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 10324948 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238316 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4015476 kB' 'DirectMap2M: 57530368 kB' 'DirectMap1G: 74448896 kB'
00:03:25.615-00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: per-key read/continue scan for \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l over the snapshot fields MemTotal through VmallocTotal; none matches]
00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:25.616 13:29:13
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.616 13:29:13 
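The scan above is how setup/common.sh answers a get_meminfo query: with IFS=': ' it reads one "key: value" pair per line and skips everything that is not the requested key. A minimal self-contained sketch of that pattern (an assumption reconstructed from the xtrace, not the verbatim SPDK source):

#!/usr/bin/env bash
# Sketch of the lookup traced above: scan a meminfo file for one key,
# echoing its value and stopping at the first match.
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # per-node counters live in sysfs; their lines carry a "Node N " prefix
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1
}

get_meminfo HugePages_Total      # prints 1024 on this runner
get_meminfo HugePages_Surp 0     # prints 0 for NUMA node 0

A linear scan is fine here: /proc/meminfo is tiny and regenerated on every read, so there is nothing to gain from caching it.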
00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.616 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.617 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58481868 kB' 'MemUsed: 7177140 kB' 'SwapCached: 0 kB' 'Active: 2348224 kB' 'Inactive: 283540 kB' 'Active(anon): 2190476 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 283540 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2478576 kB' 'Mapped: 37980 kB' 'AnonPages: 156340 kB' 'Shmem: 2037288 kB' 'KernelStack: 13448 kB' 'PageTables: 3224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 298144 kB' 'Slab: 723644 kB' 'SReclaimable: 298144 kB' 'SUnreclaim: 425500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:25.617 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.617 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:25.617 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.617 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... the compare/continue/read triplet repeats for each remaining node0 key listed in the printf line above, MemFree through HugePages_Free, until HugePages_Surp matches ...]
00:03:25.618 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.618 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.618 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:25.618 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:25.618 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:25.618 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:25.618 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:25.618 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
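Both passes above feed the same bookkeeping: nodes_sys[] holds what the kernel reports per NUMA node, nodes_test[] what the test expects after adding reserved and surplus pages. A sketch of that bookkeeping (an assumption pieced together from the hugepages.sh line numbers in the trace, additionally assuming 2 MiB hugepages for the sysfs path, not the verbatim source):

#!/usr/bin/env bash
# Record the kernel's per-node hugepage counts, then check them against
# what the test expects for each node it tracks.
shopt -s extglob nullglob

declare -A nodes_sys nodes_test
nodes_test[0]=1024            # expectation taken from the 'node0=1024' line above

for node in /sys/devices/system/node/node+([0-9]); do
    # the node path ends in e.g. "node0"; keep only the digits as the key
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done

for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_sys[$node]} expecting ${nodes_test[$node]}"
    [[ ${nodes_sys[$node]} == "${nodes_test[$node]}" ]] || echo "mismatch on node$node"
done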
00:03:25.618 node0=1024 expecting 1024
00:03:25.618 13:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:25.618 
00:03:25.618 real 0m7.509s
00:03:25.618 user 0m2.822s
00:03:25.618 sys 0m4.734s
00:03:25.618 13:29:13 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:25.618 13:29:13 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:25.618 ************************************
00:03:25.618 END TEST no_shrink_alloc
00:03:25.618 ************************************
00:03:25.618 13:29:13 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:25.618 13:29:13 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:03:25.618 13:29:13 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:25.618 13:29:13 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:25.618 13:29:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:25.618 13:29:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:25.618 13:29:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:25.618 13:29:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:25.618 13:29:13 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:25.618 13:29:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:25.618 13:29:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:25.618 13:29:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:25.618 13:29:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:25.618 13:29:13 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:25.618 13:29:13 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:25.618 
00:03:25.618 real 0m28.432s
00:03:25.618 user 0m11.187s
00:03:25.618 sys 0m17.603s
00:03:25.618 13:29:13 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:25.618 13:29:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:25.618 ************************************
00:03:25.618 END TEST hugepages
00:03:25.618 ************************************
00:03:25.618 13:29:13 setup.sh -- common/autotest_common.sh@1142 -- # return 0
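Every sub-suite in this log is framed by the same START/END banner pair, a time block, and a propagated return code. A sketch of what such a run_test-style wrapper could look like (an assumption pieced together from the banners and return codes in this log; the real helper lives in common/autotest_common.sh and also validates its arguments, as the '[' 2 -le 1 ']' checks below show):

#!/usr/bin/env bash
# Run a named test, frame it with banners, and propagate its exit status.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test guess_driver guess_driver   # as invoked by driver.sh@69 below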
00:03:25.618 13:29:13 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:03:25.618 13:29:13 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:25.618 13:29:13 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:25.618 13:29:13 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:25.618 ************************************
00:03:25.618 START TEST driver
00:03:25.618 ************************************
00:03:25.618 13:29:13 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:03:25.618 * Looking for test storage...
00:03:25.618 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:03:25.618 13:29:13 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:03:25.618 13:29:13 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:25.618 13:29:13 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:03:30.985 13:29:19 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:30.985 13:29:19 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:30.985 13:29:19 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:30.985 13:29:19 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:30.985 ************************************
00:03:30.985 START TEST guess_driver
00:03:30.985 ************************************
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 370 > 0 ))
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:03:30.985 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:30.985 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:30.985 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:30.985 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:30.985 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:03:30.985 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:03:30.985 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:03:30.985 Looking for driver=vfio-pci
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:03:30.985 13:29:19 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:03:34.284 13:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:34.284 13:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:34.284 13:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... the same marker/driver check pair repeats for each remaining device line emitted by setup.sh config, every one of them reporting vfio-pci ...]
00:03:34.285 13:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:34.285 13:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:03:34.285 13:29:22 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:34.285 13:29:22 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:03:39.572 
00:03:39.572 real 0m8.704s
00:03:39.572 user 0m2.773s
00:03:39.572 sys 0m5.108s
00:03:39.572 13:29:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:39.572 13:29:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:03:39.572 ************************************
00:03:39.572 END TEST guess_driver
00:03:39.572 ************************************
00:03:39.572 13:29:27 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0
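The guess above boils down to three checks: are there IOMMU groups (or is unsafe no-IOMMU mode enabled), and does the vfio_pci module resolve. A condensed sketch of that decision (an assumption based on driver.sh@21-@49 as traced, not the verbatim script):

#!/usr/bin/env bash
# Prefer vfio-pci when the IOMMU is active and the module resolves.
pick_driver() {
    local iommu_groups=(/sys/kernel/iommu_groups/*)   # 370 groups on this runner
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == [Yy] ]]; then
        # modprobe --show-depends fails if the module cannot be resolved
        if modprobe --show-depends vfio_pci &> /dev/null; then
            echo vfio-pci
            return 0
        fi
    fi
    echo 'No valid driver found'
    return 1
}

driver=$(pick_driver) && echo "Looking for driver=$driver"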
00:03:39.572 
00:03:39.572 real 0m13.928s
00:03:39.572 user 0m4.315s
00:03:39.572 sys 0m8.040s
00:03:39.572 13:29:27 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:39.572 13:29:27 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:39.572 ************************************
00:03:39.572 END TEST driver
00:03:39.572 ************************************
00:03:39.572 13:29:27 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:39.572 13:29:27 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:03:39.572 13:29:27 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:39.572 13:29:27 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:39.572 13:29:27 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:39.572 ************************************
00:03:39.572 START TEST devices
00:03:39.572 ************************************
00:03:39.572 13:29:27 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:03:39.572 * Looking for test storage...
00:03:39.572 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:03:39.572 13:29:27 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:39.572 13:29:27 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:03:39.572 13:29:27 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:39.572 13:29:27 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:03:43.774 13:29:32 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:03:43.774 13:29:32 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:43.774 13:29:32 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:43.774 13:29:32 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:43.774 13:29:32 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:43.774 13:29:32 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:43.774 13:29:32 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:43.774 13:29:32 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:43.774 13:29:32 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:43.774 13:29:32 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:03:43.774 13:29:32 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:03:43.774 13:29:32 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:03:43.774 13:29:32 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:03:43.774 13:29:32 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:03:43.774 13:29:32 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:43.774 13:29:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:03:43.774 13:29:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:03:43.774 13:29:32 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0
00:03:43.774 13:29:32 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]]
00:03:43.774 13:29:32 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
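The block_in_use probe that follows decides whether nvme0n1 can serve as the test disk; together with the zoned-namespace filter and the minimum-size check above it forms the eligibility test. A condensed sketch of those checks (an assumption based on the calls visible in the trace; the real script first probes with spdk-gpt.py before falling back to blkid):

#!/usr/bin/env bash
# Keep only non-zoned NVMe namespaces that are big enough and unpartitioned.
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in the trace

is_block_zoned() {
    [[ -e /sys/block/$1/queue/zoned ]] && [[ $(< "/sys/block/$1/queue/zoned") != none ]]
}

sec_size_to_bytes() {
    # /sys/block/<dev>/size counts 512-byte sectors regardless of sector size
    echo $(( $(< "/sys/block/$1/size") * 512 ))
}

for block in /sys/block/nvme*; do
    dev=${block##*/}
    is_block_zoned "$dev" && continue                             # skip zoned namespaces
    [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || continue  # already partitioned
    (( $(sec_size_to_bytes "$dev") >= min_disk_size )) && echo "test disk: $dev"
done

On this runner the lone namespace reports 1920383410176 bytes and no partition table, so it passes every check.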
00:03:43.774 13:29:32 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:03:43.774 13:29:32 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:03:43.774 No valid GPT data, bailing
00:03:43.774 13:29:32 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:43.774 13:29:32 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:03:43.774 13:29:32 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:03:43.774 13:29:32 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:03:43.774 13:29:32 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:03:43.774 13:29:32 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:03:43.774 13:29:32 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176
00:03:43.774 13:29:32 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size ))
00:03:43.774 13:29:32 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:43.774 13:29:32 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0
00:03:43.774 13:29:32 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:03:43.774 13:29:32 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:03:43.774 13:29:32 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:03:43.774 13:29:32 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:43.774 13:29:32 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:43.774 13:29:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:43.774 ************************************
00:03:43.774 START TEST nvme_mount
00:03:43.774 ************************************
00:03:43.774 13:29:32 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount
00:03:43.774 13:29:32 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:03:43.774 13:29:32 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:03:43.774 13:29:32 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:43.774 13:29:32 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:43.774 13:29:32 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:03:43.774 13:29:32 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:43.774 13:29:32 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:03:43.774 13:29:32 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:03:43.774 13:29:32 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:43.774 13:29:32 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:03:43.774 13:29:32 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:03:43.774 13:29:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:03:43.774 13:29:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:43.774 13:29:32 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:43.774 13:29:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:43.774 13:29:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:43.774 13:29:32 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:43.774 13:29:32 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:43.774 13:29:32 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:03:45.155 Creating new GPT entries in memory.
00:03:45.155 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:45.155 other utilities.
00:03:45.155 13:29:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:03:45.155 13:29:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:45.155 13:29:33 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:45.155 13:29:33 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:45.155 13:29:33 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:46.094 Creating new GPT entries in memory.
00:03:46.094 The operation has completed successfully.
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2396608
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
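The partition arithmetic above starts the first partition at sector 2048 and sizes it in 512-byte sectors, which is exactly where the --new=1:2048:2099199 argument comes from. A sketch of the whole partition-and-format step (an assumption condensed from the partition_drive/mkfs trace; udevadm settle stands in for the sync_dev_uevents.sh listener the real script uses, and the mount point is hypothetical):

#!/usr/bin/env bash
# Zap a disk, carve one 1 GiB partition, and put ext4 on it.
disk=/dev/nvme0n1
size=$(( 1073741824 / 512 ))                  # 1 GiB expressed in 512-byte sectors

sgdisk "$disk" --zap-all                      # wipe any existing GPT/MBR
part_start=2048
part_end=$(( part_start + size - 1 ))         # 2099199, matching the trace
flock "$disk" sgdisk "$disk" --new=1:"$part_start":"$part_end"
udevadm settle                                # wait for the kernel to publish nvme0n1p1

mkdir -p /tmp/nvme_mount                      # hypothetical mount point
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" /tmp/nvme_mount

Taking the flock before sgdisk keeps concurrent tooling (udev probes, other tests) from racing the partition-table rewrite.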
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:46.094 13:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:03:49.392 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:03:49.392 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the same compare/read pair repeats for the other non-matching addresses 0000:80:01.7, .4, .5, .2, .3, .0 and .1 ...]
00:03:49.392 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:03:49.392 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:03:49.392 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:49.392 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the compare/read pair repeats for the remaining 0000:00:01.x addresses, none of which match ...]
00:03:49.392 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:49.392 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:49.392 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:49.392 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:49.392 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:49.392 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:03:49.392 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:49.392 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:49.392 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:49.392 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:49.392 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:49.392 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:49.392 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:49.653 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:49.653 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54
00:03:49.653 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:49.653 /dev/nvme0n1: calling ioctl to re-read partition table: Success
/dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:49.653 13:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:49.653 13:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.653 13:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:49.653 13:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:49.913 13:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.913 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:49.913 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:03:49.913 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:49.913 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.913 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:49.913 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:49.913 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:49.913 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:49.913 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:49.913 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.913 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:03:49.913 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:49.913 13:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.913 13:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:53.214 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.214 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.214 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.214 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.214 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.214 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.214 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.214 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:03:53.214 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.214 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.214 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.214 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.475 13:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.475 13:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:53.475 13:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:53.475 13:29:42 setup.sh.devices.nvme_mount -- 
setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.475 13:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.475 13:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.475 13:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.475 13:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:03:53.475 13:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:03:53.475 13:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:53.475 13:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:53.475 13:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:53.475 13:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:53.475 13:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:53.475 13:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:53.475 13:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.475 13:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:03:53.475 13:29:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:53.475 13:29:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.475 13:29:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.681 
13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:57.681 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:57.681 00:03:57.681 real 0m13.644s 00:03:57.681 user 0m4.142s 00:03:57.681 sys 0m7.348s 
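The nvme_mount test that just completed exercises a simple lifecycle: partition, mkfs.ext4, mount, drop a test file, verify that setup.sh skips the busy device, then tear everything down with wipefs. A minimal sketch of the create and cleanup halves, using this run's device names (the standalone function form is an assumption; the flags and command order are taken from the xtrace above):

# Create half: mkfs + mount as traced from setup/common.sh@66-72.
mkfs_and_mount() {
    local dev=$1 mount=$2 size=$3     # size is optional, e.g. 1024M
    mkdir -p "$mount"
    [[ -e $dev ]] || return 1
    # -q quiet, -F force (allows a whole disk); $size intentionally unquoted
    # so an empty value simply disappears from the command line.
    mkfs.ext4 -qF "$dev" $size
    mount "$dev" "$mount"
}

# Cleanup half: cleanup_nvme as traced from setup/devices.sh@20-28.
cleanup_nvme() {
    local mnt=$1
    mountpoint -q "$mnt" && umount "$mnt"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # partition first
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1       # then disk + GPT headers
}

wipefs --all erases every recognized signature (ext4 magic, both GPT headers, the protective MBR), which is why the log reports erased bytes at several distinct offsets.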
00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.681 13:29:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:57.681 ************************************ 00:03:57.681 END TEST nvme_mount 00:03:57.681 ************************************ 00:03:57.681 13:29:45 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:57.681 13:29:45 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:57.681 13:29:45 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.681 13:29:45 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.681 13:29:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:57.681 ************************************ 00:03:57.681 START TEST dm_mount 00:03:57.681 ************************************ 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:57.681 13:29:46 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:58.621 Creating new GPT entries in memory. 00:03:58.621 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:58.621 other utilities. 
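The sgdisk zap above is followed by the partition loop of setup/common.sh@57-60, which converts the 1 GiB byte size into 512-byte sectors and lays partitions out back to back starting at sector 2048. The arithmetic, as a sketch that reproduces the sgdisk calls logged below:

# Partition loop as traced (common.sh@51 and @57-60).
disk=/dev/nvme0n1
part_no=2
size=$(( 1073741824 / 512 ))    # 1 GiB in 512-byte sectors -> 2097152
part_start=0 part_end=0
for (( part = 1; part <= part_no; part++ )); do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    # flock serializes partition-table writers on the same disk.
    flock "$disk" sgdisk "$disk" --new=$part:$part_start:$part_end
done
# Iteration 1 yields --new=1:2048:2099199, iteration 2 yields
# --new=2:2099200:4196351, matching the two invocations in the log.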
00:03:58.621 13:29:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:58.621 13:29:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.621 13:29:47 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:58.621 13:29:47 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:58.621 13:29:47 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:59.607 Creating new GPT entries in memory. 00:03:59.608 The operation has completed successfully. 00:03:59.608 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:59.608 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:59.608 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:59.608 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:59.608 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:00.549 The operation has completed successfully. 00:04:00.550 13:29:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:00.550 13:29:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.550 13:29:49 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2402156 00:04:00.550 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:00.550 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:00.550 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:00.550 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:00.550 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:00.550 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:00.550 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:00.550 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:00.550 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:00.550 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:04:00.550 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-1 00:04:00.550 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:04:00.550 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:04:00.550 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:00.550 13:29:49 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:00.550 13:29:49 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:00.550 13:29:49 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:00.550 13:29:49 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:00.810 13:29:49 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:00.810 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:00.810 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:00.810 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:00.810 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:00.810 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:00.810 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:00.810 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:00.810 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:00.810 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:00.810 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.811 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:00.811 13:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:00.811 13:29:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.811 13:29:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.011 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.012 13:29:52 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.309 13:29:56 
setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:08.309 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:08.309 00:04:08.309 real 0m10.809s 00:04:08.309 user 0m2.847s 00:04:08.309 sys 0m5.039s 
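The dm_mount teardown traced above (setup/devices.sh@33-43) undoes the pieces in reverse: unmount, remove the device-mapper node, then wipe both backing partitions. A sketch with this run's names (the function form is an assumption; commands and order follow the xtrace):

cleanup_dm() {
    local mnt=$1
    mountpoint -q "$mnt" && umount "$mnt"    # already unmounted in this run
    # Drop the dm node before touching the partitions it sits on.
    [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2
}

The ordering matters: while dm-1 still holds nvme0n1p1 and nvme0n1p2 (the holders/dm-1 links checked when the mapping was created), the partitions are in use, so the dmsetup remove has to come first.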
00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.309 13:29:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:08.309 ************************************ 00:04:08.309 END TEST dm_mount 00:04:08.309 ************************************ 00:04:08.309 13:29:56 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:08.309 13:29:56 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:08.309 13:29:56 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:08.309 13:29:56 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:08.309 13:29:56 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:08.309 13:29:56 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:08.309 13:29:56 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:08.309 13:29:56 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:08.569 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:08.569 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:08.569 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:08.569 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:08.569 13:29:57 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:08.569 13:29:57 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:08.569 13:29:57 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:08.569 13:29:57 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:08.569 13:29:57 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:08.569 13:29:57 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:08.569 13:29:57 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:08.569 00:04:08.569 real 0m29.280s 00:04:08.569 user 0m8.673s 00:04:08.569 sys 0m15.415s 00:04:08.569 13:29:57 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.569 13:29:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:08.569 ************************************ 00:04:08.569 END TEST devices 00:04:08.569 ************************************ 00:04:08.829 13:29:57 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:08.829 00:04:08.829 real 1m38.603s 00:04:08.829 user 0m32.994s 00:04:08.829 sys 0m56.940s 00:04:08.829 13:29:57 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.829 13:29:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:08.829 ************************************ 00:04:08.829 END TEST setup.sh 00:04:08.829 ************************************ 00:04:08.829 13:29:57 -- common/autotest_common.sh@1142 -- # return 0 00:04:08.829 13:29:57 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:13.036 Hugepages 00:04:13.036 node hugesize free / total 00:04:13.036 node0 1048576kB 0 / 0 00:04:13.036 node0 2048kB 2048 / 2048 00:04:13.036 node1 1048576kB 0 / 0 00:04:13.036 node1 2048kB 0 / 0 00:04:13.036 00:04:13.036 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:13.036 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 
00:04:13.036 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:13.036 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:13.036 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:13.036 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:13.036 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:13.036 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:13.036 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:13.037 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:13.037 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:13.037 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:13.037 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:13.037 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:13.037 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:13.037 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:13.037 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:13.037 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:13.037 13:30:01 -- spdk/autotest.sh@130 -- # uname -s 00:04:13.037 13:30:01 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:13.037 13:30:01 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:13.037 13:30:01 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:16.341 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:16.341 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:16.603 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:16.603 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:16.603 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:16.603 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:16.603 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:16.603 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:16.603 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:16.603 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:16.603 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:16.603 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:16.603 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:16.603 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:16.603 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:16.603 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:18.519 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:18.520 13:30:06 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:19.546 13:30:07 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:19.546 13:30:07 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:19.546 13:30:07 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:19.546 13:30:07 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:19.546 13:30:07 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:19.546 13:30:07 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:19.546 13:30:07 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:19.546 13:30:07 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:19.546 13:30:07 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:19.546 13:30:07 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:19.546 13:30:07 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:04:19.546 13:30:07 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.757 Waiting for block devices as requested 00:04:23.757 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:23.757 0000:80:01.7 (8086 
0b00): vfio-pci -> ioatdma 00:04:23.757 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:23.757 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:23.757 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:23.757 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:23.757 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:23.757 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:24.016 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:24.016 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:24.016 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:24.276 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:24.276 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:24.276 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:24.276 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:24.536 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:24.536 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:24.536 13:30:13 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:24.536 13:30:13 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:24.536 13:30:13 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:24.536 13:30:13 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:04:24.536 13:30:13 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:24.536 13:30:13 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:24.536 13:30:13 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:24.536 13:30:13 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:24.536 13:30:13 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:24.536 13:30:13 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:24.536 13:30:13 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:24.536 13:30:13 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:24.536 13:30:13 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:24.536 13:30:13 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:04:24.536 13:30:13 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:24.536 13:30:13 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:24.536 13:30:13 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:24.536 13:30:13 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:24.536 13:30:13 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:24.536 13:30:13 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:24.536 13:30:13 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:24.536 13:30:13 -- common/autotest_common.sh@1557 -- # continue 00:04:24.536 13:30:13 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:24.536 13:30:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:24.536 13:30:13 -- common/autotest_common.sh@10 -- # set +x 00:04:24.536 13:30:13 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:24.536 13:30:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:24.536 13:30:13 -- common/autotest_common.sh@10 -- # set +x 00:04:24.536 13:30:13 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:28.736 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:28.736 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:28.736 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 
00:04:28.736 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:28.736 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:28.736 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:28.736 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:28.736 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:28.736 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:28.736 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:28.736 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:28.736 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:28.736 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:28.736 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:28.736 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:28.736 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:28.736 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:28.736 13:30:17 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:28.736 13:30:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:28.736 13:30:17 -- common/autotest_common.sh@10 -- # set +x 00:04:28.736 13:30:17 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:28.736 13:30:17 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:28.736 13:30:17 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:28.736 13:30:17 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:28.736 13:30:17 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:28.736 13:30:17 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:28.736 13:30:17 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:28.736 13:30:17 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:28.736 13:30:17 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:28.736 13:30:17 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:28.736 13:30:17 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:28.736 13:30:17 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:28.736 13:30:17 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:04:28.736 13:30:17 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:28.736 13:30:17 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:28.736 13:30:17 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:04:28.736 13:30:17 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:28.736 13:30:17 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:28.736 13:30:17 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:28.736 13:30:17 -- common/autotest_common.sh@1593 -- # return 0 00:04:28.736 13:30:17 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:28.736 13:30:17 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:28.736 13:30:17 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:28.737 13:30:17 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:28.737 13:30:17 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:28.737 13:30:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:28.737 13:30:17 -- common/autotest_common.sh@10 -- # set +x 00:04:28.737 13:30:17 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:28.737 13:30:17 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:28.737 13:30:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.737 13:30:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:04:28.737 13:30:17 -- common/autotest_common.sh@10 -- # set +x 00:04:28.737 ************************************ 00:04:28.737 START TEST env 00:04:28.737 ************************************ 00:04:28.737 13:30:17 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:28.737 * Looking for test storage... 00:04:28.737 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:04:28.737 13:30:17 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:28.737 13:30:17 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.737 13:30:17 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.737 13:30:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.998 ************************************ 00:04:28.998 START TEST env_memory 00:04:28.998 ************************************ 00:04:28.998 13:30:17 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:28.998 00:04:28.998 00:04:28.998 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.998 http://cunit.sourceforge.net/ 00:04:28.998 00:04:28.998 00:04:28.998 Suite: memory 00:04:28.998 Test: alloc and free memory map ...[2024-07-12 13:30:17.363753] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:28.998 passed 00:04:28.998 Test: mem map translation ...[2024-07-12 13:30:17.378319] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:28.998 [2024-07-12 13:30:17.378343] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:28.998 [2024-07-12 13:30:17.378382] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:28.998 [2024-07-12 13:30:17.378389] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:28.998 passed 00:04:28.998 Test: mem map registration ...[2024-07-12 13:30:17.404212] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:28.998 [2024-07-12 13:30:17.404233] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:28.998 passed 00:04:28.998 Test: mem map adjacent registrations ...passed 00:04:28.998 00:04:28.998 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.998 suites 1 1 n/a 0 0 00:04:28.998 tests 4 4 4 0 0 00:04:28.998 asserts 152 152 152 0 n/a 00:04:28.998 00:04:28.998 Elapsed time = 0.102 seconds 00:04:28.998 00:04:28.998 real 0m0.113s 00:04:28.998 user 0m0.100s 00:04:28.998 sys 0m0.013s 00:04:28.998 13:30:17 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.998 13:30:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:28.998 ************************************ 
00:04:28.998 END TEST env_memory
00:04:28.998 ************************************
00:04:28.998 13:30:17 env -- common/autotest_common.sh@1142 -- # return 0
00:04:28.998 13:30:17 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:28.998 13:30:17 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:28.998 13:30:17 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:28.998 13:30:17 env -- common/autotest_common.sh@10 -- # set +x
00:04:28.998 ************************************
00:04:28.998 START TEST env_vtophys
00:04:28.998 ************************************
00:04:28.998 13:30:17 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:28.998 EAL: lib.eal log level changed from notice to debug
00:04:28.998 EAL: Detected lcore 0 as core 0 on socket 0
00:04:28.998 EAL: Detected lcore 1 as core 1 on socket 0
00:04:28.998 EAL: Detected lcore 2 as core 2 on socket 0
00:04:28.998 EAL: Detected lcore 3 as core 3 on socket 0
00:04:28.998 EAL: Detected lcore 4 as core 4 on socket 0
00:04:28.998 EAL: Detected lcore 5 as core 5 on socket 0
00:04:28.998 EAL: Detected lcore 6 as core 6 on socket 0
00:04:28.998 EAL: Detected lcore 7 as core 7 on socket 0
00:04:28.998 EAL: Detected lcore 8 as core 8 on socket 0
00:04:28.998 EAL: Detected lcore 9 as core 9 on socket 0
00:04:28.998 EAL: Detected lcore 10 as core 10 on socket 0
00:04:28.998 EAL: Detected lcore 11 as core 11 on socket 0
00:04:28.998 EAL: Detected lcore 12 as core 12 on socket 0
00:04:28.998 EAL: Detected lcore 13 as core 13 on socket 0
00:04:28.998 EAL: Detected lcore 14 as core 14 on socket 0
00:04:28.998 EAL: Detected lcore 15 as core 15 on socket 0
00:04:28.998 EAL: Detected lcore 16 as core 16 on socket 0
00:04:28.998 EAL: Detected lcore 17 as core 17 on socket 0
00:04:28.998 EAL: Detected lcore 18 as core 18 on socket 0
00:04:28.998 EAL: Detected lcore 19 as core 19 on socket 0
00:04:28.998 EAL: Detected lcore 20 as core 20 on socket 0
00:04:28.998 EAL: Detected lcore 21 as core 21 on socket 0
00:04:28.998 EAL: Detected lcore 22 as core 22 on socket 0
00:04:28.998 EAL: Detected lcore 23 as core 23 on socket 0
00:04:28.998 EAL: Detected lcore 24 as core 24 on socket 0
00:04:28.998 EAL: Detected lcore 25 as core 25 on socket 0
00:04:28.998 EAL: Detected lcore 26 as core 26 on socket 0
00:04:28.998 EAL: Detected lcore 27 as core 27 on socket 0
00:04:28.998 EAL: Detected lcore 28 as core 28 on socket 0
00:04:28.998 EAL: Detected lcore 29 as core 29 on socket 0
00:04:28.998 EAL: Detected lcore 30 as core 30 on socket 0
00:04:28.998 EAL: Detected lcore 31 as core 31 on socket 0
00:04:28.998 EAL: Detected lcore 32 as core 32 on socket 0
00:04:28.998 EAL: Detected lcore 33 as core 33 on socket 0
00:04:28.998 EAL: Detected lcore 34 as core 34 on socket 0
00:04:28.998 EAL: Detected lcore 35 as core 35 on socket 0
00:04:28.998 EAL: Detected lcore 36 as core 0 on socket 1
00:04:28.998 EAL: Detected lcore 37 as core 1 on socket 1
00:04:28.998 EAL: Detected lcore 38 as core 2 on socket 1
00:04:28.998 EAL: Detected lcore 39 as core 3 on socket 1
00:04:28.998 EAL: Detected lcore 40 as core 4 on socket 1
00:04:28.998 EAL: Detected lcore 41 as core 5 on socket 1
00:04:28.998 EAL: Detected lcore 42 as core 6 on socket 1
00:04:28.998 EAL: Detected lcore 43 as core 7 on socket 1
00:04:28.998 EAL: Detected lcore 44 as core 8 on socket 1
00:04:28.998 EAL: Detected lcore 45 as core 9 on socket 1
00:04:28.998 EAL: Detected lcore 46 as core 10 on socket 1
00:04:28.998 EAL: Detected lcore 47 as core 11 on socket 1
00:04:28.998 EAL: Detected lcore 48 as core 12 on socket 1
00:04:28.998 EAL: Detected lcore 49 as core 13 on socket 1
00:04:28.998 EAL: Detected lcore 50 as core 14 on socket 1
00:04:28.998 EAL: Detected lcore 51 as core 15 on socket 1
00:04:28.998 EAL: Detected lcore 52 as core 16 on socket 1
00:04:28.998 EAL: Detected lcore 53 as core 17 on socket 1
00:04:28.998 EAL: Detected lcore 54 as core 18 on socket 1
00:04:28.998 EAL: Detected lcore 55 as core 19 on socket 1
00:04:28.998 EAL: Detected lcore 56 as core 20 on socket 1
00:04:28.998 EAL: Detected lcore 57 as core 21 on socket 1
00:04:28.998 EAL: Detected lcore 58 as core 22 on socket 1
00:04:28.998 EAL: Detected lcore 59 as core 23 on socket 1
00:04:28.998 EAL: Detected lcore 60 as core 24 on socket 1
00:04:28.998 EAL: Detected lcore 61 as core 25 on socket 1
00:04:28.998 EAL: Detected lcore 62 as core 26 on socket 1
00:04:28.998 EAL: Detected lcore 63 as core 27 on socket 1
00:04:28.998 EAL: Detected lcore 64 as core 28 on socket 1
00:04:28.998 EAL: Detected lcore 65 as core 29 on socket 1
00:04:28.998 EAL: Detected lcore 66 as core 30 on socket 1
00:04:28.998 EAL: Detected lcore 67 as core 31 on socket 1
00:04:28.998 EAL: Detected lcore 68 as core 32 on socket 1
00:04:28.998 EAL: Detected lcore 69 as core 33 on socket 1
00:04:28.998 EAL: Detected lcore 70 as core 34 on socket 1
00:04:28.998 EAL: Detected lcore 71 as core 35 on socket 1
00:04:28.998 EAL: Detected lcore 72 as core 0 on socket 0
00:04:28.998 EAL: Detected lcore 73 as core 1 on socket 0
00:04:28.998 EAL: Detected lcore 74 as core 2 on socket 0
00:04:28.998 EAL: Detected lcore 75 as core 3 on socket 0
00:04:28.998 EAL: Detected lcore 76 as core 4 on socket 0
00:04:28.998 EAL: Detected lcore 77 as core 5 on socket 0
00:04:28.998 EAL: Detected lcore 78 as core 6 on socket 0
00:04:28.998 EAL: Detected lcore 79 as core 7 on socket 0
00:04:28.998 EAL: Detected lcore 80 as core 8 on socket 0
00:04:28.998 EAL: Detected lcore 81 as core 9 on socket 0
00:04:28.998 EAL: Detected lcore 82 as core 10 on socket 0
00:04:28.998 EAL: Detected lcore 83 as core 11 on socket 0
00:04:28.998 EAL: Detected lcore 84 as core 12 on socket 0
00:04:28.998 EAL: Detected lcore 85 as core 13 on socket 0
00:04:28.998 EAL: Detected lcore 86 as core 14 on socket 0
00:04:28.998 EAL: Detected lcore 87 as core 15 on socket 0
00:04:28.998 EAL: Detected lcore 88 as core 16 on socket 0
00:04:28.998 EAL: Detected lcore 89 as core 17 on socket 0
00:04:28.998 EAL: Detected lcore 90 as core 18 on socket 0
00:04:28.998 EAL: Detected lcore 91 as core 19 on socket 0
00:04:28.998 EAL: Detected lcore 92 as core 20 on socket 0
00:04:28.998 EAL: Detected lcore 93 as core 21 on socket 0
00:04:28.998 EAL: Detected lcore 94 as core 22 on socket 0
00:04:28.998 EAL: Detected lcore 95 as core 23 on socket 0
00:04:28.998 EAL: Detected lcore 96 as core 24 on socket 0
00:04:28.998 EAL: Detected lcore 97 as core 25 on socket 0
00:04:28.998 EAL: Detected lcore 98 as core 26 on socket 0
00:04:28.998 EAL: Detected lcore 99 as core 27 on socket 0
00:04:28.998 EAL: Detected lcore 100 as core 28 on socket 0
00:04:28.998 EAL: Detected lcore 101 as core 29 on socket 0
00:04:28.998 EAL: Detected lcore 102 as core 30 on socket 0
00:04:28.998 EAL: Detected lcore 103 as core 31 on socket 0
00:04:28.998 EAL: Detected lcore 104 as core 32 on socket 0
00:04:28.998 EAL: Detected lcore 105 as core 33 on socket 0
00:04:28.998 EAL: Detected lcore 106 as core 34 on socket 0
00:04:28.998 EAL: Detected lcore 107 as core 35 on socket 0
00:04:28.998 EAL: Detected lcore 108 as core 0 on socket 1
00:04:28.998 EAL: Detected lcore 109 as core 1 on socket 1
00:04:28.998 EAL: Detected lcore 110 as core 2 on socket 1
00:04:28.998 EAL: Detected lcore 111 as core 3 on socket 1
00:04:28.999 EAL: Detected lcore 112 as core 4 on socket 1
00:04:28.999 EAL: Detected lcore 113 as core 5 on socket 1
00:04:28.999 EAL: Detected lcore 114 as core 6 on socket 1
00:04:28.999 EAL: Detected lcore 115 as core 7 on socket 1
00:04:28.999 EAL: Detected lcore 116 as core 8 on socket 1
00:04:28.999 EAL: Detected lcore 117 as core 9 on socket 1
00:04:28.999 EAL: Detected lcore 118 as core 10 on socket 1
00:04:28.999 EAL: Detected lcore 119 as core 11 on socket 1
00:04:28.999 EAL: Detected lcore 120 as core 12 on socket 1
00:04:28.999 EAL: Detected lcore 121 as core 13 on socket 1
00:04:28.999 EAL: Detected lcore 122 as core 14 on socket 1
00:04:28.999 EAL: Detected lcore 123 as core 15 on socket 1
00:04:28.999 EAL: Detected lcore 124 as core 16 on socket 1
00:04:28.999 EAL: Detected lcore 125 as core 17 on socket 1
00:04:28.999 EAL: Detected lcore 126 as core 18 on socket 1
00:04:28.999 EAL: Detected lcore 127 as core 19 on socket 1
00:04:28.999 EAL: Skipped lcore 128 as core 20 on socket 1
00:04:28.999 EAL: Skipped lcore 129 as core 21 on socket 1
00:04:28.999 EAL: Skipped lcore 130 as core 22 on socket 1
00:04:28.999 EAL: Skipped lcore 131 as core 23 on socket 1
00:04:28.999 EAL: Skipped lcore 132 as core 24 on socket 1
00:04:28.999 EAL: Skipped lcore 133 as core 25 on socket 1
00:04:28.999 EAL: Skipped lcore 134 as core 26 on socket 1
00:04:28.999 EAL: Skipped lcore 135 as core 27 on socket 1
00:04:28.999 EAL: Skipped lcore 136 as core 28 on socket 1
00:04:28.999 EAL: Skipped lcore 137 as core 29 on socket 1
00:04:28.999 EAL: Skipped lcore 138 as core 30 on socket 1
00:04:28.999 EAL: Skipped lcore 139 as core 31 on socket 1
00:04:28.999 EAL: Skipped lcore 140 as core 32 on socket 1
00:04:28.999 EAL: Skipped lcore 141 as core 33 on socket 1
00:04:28.999 EAL: Skipped lcore 142 as core 34 on socket 1
00:04:28.999 EAL: Skipped lcore 143 as core 35 on socket 1
00:04:28.999 EAL: Maximum logical cores by configuration: 128
00:04:28.999 EAL: Detected CPU lcores: 128
00:04:28.999 EAL: Detected NUMA nodes: 2
00:04:28.999 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:04:28.999 EAL: Checking presence of .so 'librte_eal.so.24'
00:04:28.999 EAL: Checking presence of .so 'librte_eal.so'
00:04:28.999 EAL: Detected static linkage of DPDK
00:04:28.999 EAL: No shared files mode enabled, IPC will be disabled
00:04:29.259 EAL: Bus pci wants IOVA as 'DC'
00:04:29.259 EAL: Buses did not request a specific IOVA mode.
00:04:29.259 EAL: IOMMU is available, selecting IOVA as VA mode.
00:04:29.259 EAL: Selected IOVA mode 'VA'
00:04:29.259 EAL: No free 2048 kB hugepages reported on node 1
00:04:29.259 EAL: Probing VFIO support...
00:04:29.259 EAL: IOMMU type 1 (Type 1) is supported
00:04:29.259 EAL: IOMMU type 7 (sPAPR) is not supported
00:04:29.259 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:04:29.259 EAL: VFIO support initialized
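A note on the detection block above: the box exposes 144 hardware threads (2 sockets x 36 cores x 2 threads), but this DPDK build caps the lcore count at 128, which is why lcores 128-143 are reported as "Skipped" and EAL settles on "Maximum logical cores by configuration: 128"; lcores 72-143 are the hyperthread siblings of 0-71. The same lcore/core/socket mapping can be cross-checked from sysfs with a small shell loop (an illustrative sketch, not part of the test run; the core IDs are raw kernel values, which EAL reports verbatim here):

  for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    n=${cpu##*cpu}                                   # lcore number
    core=$(cat "$cpu/topology/core_id")              # physical core within the package
    sock=$(cat "$cpu/topology/physical_package_id")  # socket / NUMA package
    echo "lcore $n as core $core on socket $sock"
  done | sort -n -k2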
00:04:29.259 EAL: Ask a virtual area of 0x2e000 bytes
00:04:29.259 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:04:29.259 EAL: Setting up physically contiguous memory...
00:04:29.259 EAL: Setting maximum number of open files to 524288
00:04:29.259 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:29.259 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:04:29.259 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:29.259 EAL: Ask a virtual area of 0x61000 bytes
00:04:29.259 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:29.260 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:29.260 EAL: Ask a virtual area of 0x400000000 bytes
00:04:29.260 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:29.260 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:29.260 EAL: Ask a virtual area of 0x61000 bytes
00:04:29.260 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:29.260 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:29.260 EAL: Ask a virtual area of 0x400000000 bytes
00:04:29.260 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:29.260 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:29.260 EAL: Ask a virtual area of 0x61000 bytes
00:04:29.260 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:29.260 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:29.260 EAL: Ask a virtual area of 0x400000000 bytes
00:04:29.260 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:29.260 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:29.260 EAL: Ask a virtual area of 0x61000 bytes
00:04:29.260 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:29.260 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:29.260 EAL: Ask a virtual area of 0x400000000 bytes
00:04:29.260 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:29.260 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:29.260 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:04:29.260 EAL: Ask a virtual area of 0x61000 bytes
00:04:29.260 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:04:29.260 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:29.260 EAL: Ask a virtual area of 0x400000000 bytes
00:04:29.260 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:04:29.260 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:04:29.260 EAL: Ask a virtual area of 0x61000 bytes
00:04:29.260 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:04:29.260 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:29.260 EAL: Ask a virtual area of 0x400000000 bytes
00:04:29.260 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:04:29.260 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:04:29.260 EAL: Ask a virtual area of 0x61000 bytes
00:04:29.260 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:04:29.260 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:29.260 EAL: Ask a virtual area of 0x400000000 bytes
00:04:29.260 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:04:29.260 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:04:29.260 EAL: Ask a virtual area of 0x61000 bytes
00:04:29.260 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:04:29.260 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:29.260 EAL: Ask a virtual area of 0x400000000 bytes
00:04:29.260 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:04:29.260 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:04:29.260 EAL: Hugepages will be freed exactly as allocated.
00:04:29.260 EAL: No shared files mode enabled, IPC is disabled
00:04:29.260 EAL: No shared files mode enabled, IPC is disabled
00:04:29.260 EAL: TSC frequency is ~2400000 KHz
00:04:29.260 EAL: Main lcore 0 is ready (tid=7f4bcd206a00;cpuset=[0])
00:04:29.260 EAL: Trying to obtain current memory policy.
00:04:29.260 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:29.260 EAL: Restoring previous memory policy: 0
00:04:29.260 EAL: request: mp_malloc_sync
00:04:29.260 EAL: No shared files mode enabled, IPC is disabled
00:04:29.260 EAL: Heap on socket 0 was expanded by 2MB
00:04:29.260 EAL: No shared files mode enabled, IPC is disabled
00:04:29.260 EAL: Mem event callback 'spdk:(nil)' registered
00:04:29.260
00:04:29.260
00:04:29.260 CUnit - A unit testing framework for C - Version 2.1-3
00:04:29.260 http://cunit.sourceforge.net/
00:04:29.260
00:04:29.260
00:04:29.260 Suite: components_suite
00:04:29.260 Test: vtophys_malloc_test ...passed
00:04:29.260 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:29.260 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:29.260 EAL: Restoring previous memory policy: 4
00:04:29.260 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.260 EAL: request: mp_malloc_sync
00:04:29.260 EAL: No shared files mode enabled, IPC is disabled
00:04:29.260 EAL: Heap on socket 0 was expanded by 4MB
00:04:29.260 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.260 EAL: request: mp_malloc_sync
00:04:29.260 EAL: No shared files mode enabled, IPC is disabled
00:04:29.260 EAL: Heap on socket 0 was shrunk by 4MB
00:04:29.260 EAL: Trying to obtain current memory policy.
00:04:29.260 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:29.260 EAL: Restoring previous memory policy: 4
00:04:29.260 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.260 EAL: request: mp_malloc_sync
00:04:29.260 EAL: No shared files mode enabled, IPC is disabled
00:04:29.260 EAL: Heap on socket 0 was expanded by 6MB
00:04:29.260 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.260 EAL: request: mp_malloc_sync
00:04:29.260 EAL: No shared files mode enabled, IPC is disabled
00:04:29.260 EAL: Heap on socket 0 was shrunk by 6MB
00:04:29.260 EAL: Trying to obtain current memory policy.
00:04:29.260 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:29.260 EAL: Restoring previous memory policy: 4
00:04:29.260 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.260 EAL: request: mp_malloc_sync
00:04:29.260 EAL: No shared files mode enabled, IPC is disabled
00:04:29.260 EAL: Heap on socket 0 was expanded by 10MB
00:04:29.260 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.260 EAL: request: mp_malloc_sync
00:04:29.260 EAL: No shared files mode enabled, IPC is disabled
00:04:29.260 EAL: Heap on socket 0 was shrunk by 10MB
00:04:29.260 EAL: Trying to obtain current memory policy.
00:04:29.260 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:29.260 EAL: Restoring previous memory policy: 4
00:04:29.260 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.260 EAL: request: mp_malloc_sync
00:04:29.260 EAL: No shared files mode enabled, IPC is disabled
00:04:29.260 EAL: Heap on socket 0 was expanded by 18MB
00:04:29.260 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.260 EAL: request: mp_malloc_sync
00:04:29.260 EAL: No shared files mode enabled, IPC is disabled
00:04:29.260 EAL: Heap on socket 0 was shrunk by 18MB
00:04:29.260 EAL: Trying to obtain current memory policy.
00:04:29.260 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:29.260 EAL: Restoring previous memory policy: 4
00:04:29.260 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.260 EAL: request: mp_malloc_sync
00:04:29.260 EAL: No shared files mode enabled, IPC is disabled
00:04:29.260 EAL: Heap on socket 0 was expanded by 34MB
00:04:29.260 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.260 EAL: request: mp_malloc_sync
00:04:29.260 EAL: No shared files mode enabled, IPC is disabled
00:04:29.260 EAL: Heap on socket 0 was shrunk by 34MB
00:04:29.260 EAL: Trying to obtain current memory policy.
00:04:29.260 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:29.260 EAL: Restoring previous memory policy: 4
00:04:29.260 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.260 EAL: request: mp_malloc_sync
00:04:29.260 EAL: No shared files mode enabled, IPC is disabled
00:04:29.260 EAL: Heap on socket 0 was expanded by 66MB
00:04:29.260 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.260 EAL: request: mp_malloc_sync
00:04:29.260 EAL: No shared files mode enabled, IPC is disabled
00:04:29.260 EAL: Heap on socket 0 was shrunk by 66MB
00:04:29.260 EAL: Trying to obtain current memory policy.
00:04:29.260 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:29.260 EAL: Restoring previous memory policy: 4
00:04:29.260 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.260 EAL: request: mp_malloc_sync
00:04:29.260 EAL: No shared files mode enabled, IPC is disabled
00:04:29.260 EAL: Heap on socket 0 was expanded by 130MB
00:04:29.260 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.260 EAL: request: mp_malloc_sync
00:04:29.260 EAL: No shared files mode enabled, IPC is disabled
00:04:29.260 EAL: Heap on socket 0 was shrunk by 130MB
00:04:29.260 EAL: Trying to obtain current memory policy.
00:04:29.260 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:29.260 EAL: Restoring previous memory policy: 4
00:04:29.260 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.260 EAL: request: mp_malloc_sync
00:04:29.260 EAL: No shared files mode enabled, IPC is disabled
00:04:29.260 EAL: Heap on socket 0 was expanded by 258MB
00:04:29.260 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.260 EAL: request: mp_malloc_sync
00:04:29.260 EAL: No shared files mode enabled, IPC is disabled
00:04:29.260 EAL: Heap on socket 0 was shrunk by 258MB
00:04:29.260 EAL: Trying to obtain current memory policy.
00:04:29.260 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:29.520 EAL: Restoring previous memory policy: 4
00:04:29.520 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.520 EAL: request: mp_malloc_sync
00:04:29.520 EAL: No shared files mode enabled, IPC is disabled
00:04:29.520 EAL: Heap on socket 0 was expanded by 514MB
00:04:29.520 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.520 EAL: request: mp_malloc_sync
00:04:29.520 EAL: No shared files mode enabled, IPC is disabled
00:04:29.520 EAL: Heap on socket 0 was shrunk by 514MB
00:04:29.520 EAL: Trying to obtain current memory policy.
00:04:29.520 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:29.780 EAL: Restoring previous memory policy: 4
00:04:29.780 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.780 EAL: request: mp_malloc_sync
00:04:29.780 EAL: No shared files mode enabled, IPC is disabled
00:04:29.780 EAL: Heap on socket 0 was expanded by 1026MB
00:04:29.780 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.780 EAL: request: mp_malloc_sync
00:04:29.780 EAL: No shared files mode enabled, IPC is disabled
00:04:29.780 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:29.780 passed
00:04:29.780
00:04:29.780 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:29.780               suites      1      1    n/a      0        0
00:04:29.780                tests      2      2      2      0        0
00:04:29.780              asserts    497    497    497      0      n/a
00:04:29.780
00:04:29.780 Elapsed time =    0.661 seconds
00:04:29.780 EAL: Calling mem event callback 'spdk:(nil)'
00:04:29.780 EAL: request: mp_malloc_sync
00:04:29.780 EAL: No shared files mode enabled, IPC is disabled
00:04:29.780 EAL: Heap on socket 0 was shrunk by 2MB
00:04:29.780 EAL: No shared files mode enabled, IPC is disabled
00:04:29.780 EAL: No shared files mode enabled, IPC is disabled
00:04:29.780 EAL: No shared files mode enabled, IPC is disabled
00:04:29.780
00:04:29.780 real	0m0.798s
00:04:29.780 user	0m0.413s
00:04:29.780 sys	0m0.361s
00:04:29.780 13:30:18 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:29.780 13:30:18 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:29.780 ************************************
00:04:29.780 END TEST env_vtophys
00:04:29.780 ************************************
00:04:29.780 13:30:18 env -- common/autotest_common.sh@1142 -- # return 0
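The START/END banners and the real/user/sys triplet wrapping each test above come from the run_test helper in autotest_common.sh; stripped to its essentials the pattern looks roughly like this (a simplified sketch, not the exact implementation):

  run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                 # run the test binary or shell function; failures abort the suite
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
  }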
00:04:29.780 13:30:18 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut
00:04:29.780 13:30:18 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:29.780 13:30:18 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:29.780 13:30:18 env -- common/autotest_common.sh@10 -- # set +x
00:04:30.039 ************************************
00:04:30.039 START TEST env_pci
00:04:30.039 ************************************
00:04:30.039 13:30:18 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut
00:04:30.039
00:04:30.039
00:04:30.039 CUnit - A unit testing framework for C - Version 2.1-3
00:04:30.039 http://cunit.sourceforge.net/
00:04:30.039
00:04:30.039
00:04:30.040 Suite: pci
00:04:30.040 Test: pci_hook ...[2024-07-12 13:30:18.401212] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2414829 has claimed it
00:04:30.040 EAL: Cannot find device (10000:00:01.0)
00:04:30.040 EAL: Failed to attach device on primary process
00:04:30.040 passed
00:04:30.040
00:04:30.040 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:30.040               suites      1      1    n/a      0        0
00:04:30.040                tests      1      1      1      0        0
00:04:30.040              asserts     25     25     25      0      n/a
00:04:30.040
00:04:30.040 Elapsed time =    0.035 seconds
00:04:30.040
00:04:30.040 real	0m0.053s
00:04:30.040 user	0m0.015s
00:04:30.040 sys	0m0.038s
00:04:30.040 13:30:18 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:30.040 13:30:18 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:30.040 ************************************
00:04:30.040 END TEST env_pci
00:04:30.040 ************************************
00:04:30.040 13:30:18 env -- common/autotest_common.sh@1142 -- # return 0
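pci_hook passes precisely because the claim fails: spdk_pci_device_claim holds a per-device lock file (here /var/tmp/spdk_pci_lock_10000:00:01.0) so two processes cannot drive the same device, and the test probes a fake 10000:00:01.0 address that a helper process has already claimed. The locking idea can be illustrated with flock in shell (an analogy only; SPDK itself takes the lock from C):

  lock=/var/tmp/spdk_pci_lock_10000:00:01.0
  exec 9>"$lock"
  if ! flock -n 9; then
    echo "Cannot create lock on device $lock, another process has claimed it" >&2
    exit 1
  fi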
00:04:30.040 13:30:18 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:30.040 13:30:18 env -- env/env.sh@15 -- # uname
00:04:30.040 13:30:18 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:30.040 13:30:18 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:30.040 13:30:18 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:30.040 13:30:18 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:04:30.040 13:30:18 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:30.040 13:30:18 env -- common/autotest_common.sh@10 -- # set +x
00:04:30.040 ************************************
00:04:30.040 START TEST env_dpdk_post_init
00:04:30.040 ************************************
00:04:30.040 13:30:18 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:30.040 EAL: Detected CPU lcores: 128
00:04:30.040 EAL: Detected NUMA nodes: 2
00:04:30.040 EAL: Detected static linkage of DPDK
00:04:30.040 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:30.040 EAL: Selected IOVA mode 'VA'
00:04:30.040 EAL: No free 2048 kB hugepages reported on node 1
00:04:30.040 EAL: VFIO support initialized
00:04:30.040 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:30.300 EAL: Using IOMMU type 1 (Type 1)
00:04:30.300 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0)
00:04:30.300 EAL: Releasing PCI mapped resource for 0000:65:00.0
00:04:30.300 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001000000
00:04:30.560 Starting DPDK initialization...
00:04:30.560 Starting SPDK post initialization...
00:04:30.560 SPDK NVMe probe
00:04:30.560 Attaching to 0000:65:00.0
00:04:30.560 Attached to 0000:65:00.0
00:04:30.560 Cleaning up...
00:04:30.560
00:04:30.560 real	0m0.467s
00:04:30.560 user	0m0.162s
00:04:30.560 sys	0m0.081s
00:04:30.561 13:30:18 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:30.561 13:30:18 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:30.561 ************************************
00:04:30.561 END TEST env_dpdk_post_init
00:04:30.561 ************************************
00:04:30.561 13:30:19 env -- common/autotest_common.sh@1142 -- # return 0
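The probe above only succeeds because 0000:65:00.0 was already detached from the kernel nvme driver and a hugepage pool was reserved before the suite started; on a fresh machine that preparation is normally done with SPDK's setup script, roughly:

  sudo HUGEMEM=2048 ./scripts/setup.sh      # reserve 2 GiB of hugepages and bind NVMe devices to vfio-pci (or uio)
  ./scripts/setup.sh status                 # verify device bindings and hugepage usage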
00:04:30.561 13:30:19 env -- env/env.sh@26 -- # uname
00:04:30.561 13:30:19 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:30.561 13:30:19 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:30.561 13:30:19 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:30.561 13:30:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:30.561 13:30:19 env -- common/autotest_common.sh@10 -- # set +x
00:04:30.561 ************************************
00:04:30.561 START TEST env_mem_callbacks
00:04:30.561 ************************************
00:04:30.561 13:30:19 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:30.561 EAL: Detected CPU lcores: 128
00:04:30.561 EAL: Detected NUMA nodes: 2
00:04:30.561 EAL: Detected static linkage of DPDK
00:04:30.561 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:30.561 EAL: Selected IOVA mode 'VA'
00:04:30.561 EAL: No free 2048 kB hugepages reported on node 1
00:04:30.561 EAL: VFIO support initialized
00:04:30.561 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:30.561
00:04:30.561
00:04:30.561 CUnit - A unit testing framework for C - Version 2.1-3
00:04:30.561 http://cunit.sourceforge.net/
00:04:30.561
00:04:30.561
00:04:30.561 Suite: memory
00:04:30.561 Test: test ...
00:04:30.561 register 0x200000200000 2097152
00:04:30.561 malloc 3145728
00:04:30.561 register 0x200000400000 4194304
00:04:30.561 buf 0x200000500000 len 3145728 PASSED
00:04:30.561 malloc 64
00:04:30.561 buf 0x2000004fff40 len 64 PASSED
00:04:30.561 malloc 4194304
00:04:30.561 register 0x200000800000 6291456
00:04:30.561 buf 0x200000a00000 len 4194304 PASSED
00:04:30.561 free 0x200000500000 3145728
00:04:30.561 free 0x2000004fff40 64
00:04:30.561 unregister 0x200000400000 4194304 PASSED
00:04:30.561 free 0x200000a00000 4194304
00:04:30.561 unregister 0x200000800000 6291456 PASSED
00:04:30.561 malloc 8388608
00:04:30.561 register 0x200000400000 10485760
00:04:30.561 buf 0x200000600000 len 8388608 PASSED
00:04:30.561 free 0x200000600000 8388608
00:04:30.561 unregister 0x200000400000 10485760 PASSED
00:04:30.561 passed
00:04:30.561
00:04:30.561 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:30.561               suites      1      1    n/a      0        0
00:04:30.561                tests      1      1      1      0        0
00:04:30.561              asserts     15     15     15      0      n/a
00:04:30.561
00:04:30.561 Elapsed time =    0.004 seconds
00:04:30.561
00:04:30.561 real	0m0.031s
00:04:30.561 user	0m0.011s
00:04:30.561 sys	0m0.019s
00:04:30.561 13:30:19 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:30.561 13:30:19 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:30.561 ************************************
00:04:30.561 END TEST env_mem_callbacks
00:04:30.561 ************************************
00:04:30.561 13:30:19 env -- common/autotest_common.sh@1142 -- # return 0
00:04:30.561
00:04:30.561 real	0m1.933s
00:04:30.561 user	0m0.897s
00:04:30.561 sys	0m0.817s
00:04:30.561 13:30:19 env -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:30.561 13:30:19 env -- common/autotest_common.sh@10 -- # set +x
00:04:30.561 ************************************
00:04:30.561 END TEST env
00:04:30.561 ************************************
00:04:30.822 13:30:19 -- common/autotest_common.sh@1142 -- # return 0
00:04:30.822 13:30:19 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh
00:04:30.822 13:30:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:30.822 13:30:19 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:30.822 13:30:19 -- common/autotest_common.sh@10 -- # set +x
00:04:30.822 ************************************
00:04:30.822 START TEST rpc
00:04:30.822 ************************************
00:04:30.822 13:30:19 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh
00:04:30.822 * Looking for test storage...
00:04:30.822 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc
00:04:30.822 13:30:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2415157
13:30:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
13:30:19 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
13:30:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2415157
13:30:19 rpc -- common/autotest_common.sh@829 -- # '[' -z 2415157 ']'
13:30:19 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
13:30:19 rpc -- common/autotest_common.sh@834 -- # local max_retries=100
13:30:19 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:30.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
13:30:19 rpc -- common/autotest_common.sh@838 -- # xtrace_disable
13:30:19 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:30.822 [2024-07-12 13:30:19.332061] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization...
00:04:30.822 [2024-07-12 13:30:19.332142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2415157 ]
00:04:30.822 EAL: No free 2048 kB hugepages reported on node 1
00:04:30.822 [2024-07-12 13:30:19.398006] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:31.083 [2024-07-12 13:30:19.474838] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:31.083 [2024-07-12 13:30:19.474876] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2415157' to capture a snapshot of events at runtime.
00:04:31.083 [2024-07-12 13:30:19.474884] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:31.083 [2024-07-12 13:30:19.474891] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:31.083 [2024-07-12 13:30:19.474896] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2415157 for offline analysis/debug.
00:04:31.083 [2024-07-12 13:30:19.474920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:31.654 13:30:20 rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:31.655 13:30:20 rpc -- common/autotest_common.sh@862 -- # return 0
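waitforlisten simply polls the target until its RPC socket answers; a minimal equivalent of the loop traced above (a sketch using the real scripts/rpc.py client and the spdk_get_version RPC):

  ./build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  for _ in $(seq 1 100); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
    kill -0 "$spdk_pid" || exit 1   # bail out if the target died before listening
    sleep 0.5
  done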
00:04:31.655 13:30:20 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc
13:30:20 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc
13:30:20 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
13:30:20 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
13:30:20 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
13:30:20 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
13:30:20 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:31.655 ************************************
00:04:31.655 START TEST rpc_integrity
00:04:31.655 ************************************
00:04:31.655 13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity
00:04:31.655 13:30:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:31.655 13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:31.655 13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:31.655 13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:30:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
13:30:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:31.655 13:30:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
13:30:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:31.655 13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:30:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
13:30:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:31.655 13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:30:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:31.655 {
00:04:31.655 "name": "Malloc0",
00:04:31.655 "aliases": [
00:04:31.655 "ab7d3f8f-5670-44e0-95e1-2c5f49e82de3"
00:04:31.655 ],
00:04:31.655 "product_name": "Malloc disk",
00:04:31.655 "block_size": 512,
00:04:31.655 "num_blocks": 16384,
00:04:31.655 "uuid": "ab7d3f8f-5670-44e0-95e1-2c5f49e82de3",
00:04:31.655 "assigned_rate_limits": {
00:04:31.655 "rw_ios_per_sec": 0,
00:04:31.655 "rw_mbytes_per_sec": 0,
00:04:31.655 "r_mbytes_per_sec": 0,
00:04:31.655 "w_mbytes_per_sec": 0
00:04:31.655 },
00:04:31.655 "claimed": false,
00:04:31.655 "zoned": false,
00:04:31.655 "supported_io_types": {
00:04:31.655 "read": true,
00:04:31.655 "write": true,
00:04:31.655 "unmap": true,
00:04:31.655 "flush": true,
00:04:31.655 "reset": true,
00:04:31.655 "nvme_admin": false,
00:04:31.655 "nvme_io": false,
00:04:31.655 "nvme_io_md": false,
00:04:31.655 "write_zeroes": true,
00:04:31.655 "zcopy": true,
00:04:31.655 "get_zone_info": false,
00:04:31.655 "zone_management": false,
00:04:31.655 "zone_append": false,
00:04:31.655 "compare": false,
00:04:31.655 "compare_and_write": false,
00:04:31.655 "abort": true,
00:04:31.655 "seek_hole": false,
00:04:31.655 "seek_data": false,
00:04:31.655 "copy": true,
00:04:31.655 "nvme_iov_md": false
00:04:31.655 },
00:04:31.655 "memory_domains": [
00:04:31.655 {
00:04:31.655 "dma_device_id": "system",
00:04:31.655 "dma_device_type": 1
00:04:31.655 },
00:04:31.655 {
00:04:31.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:31.655 "dma_device_type": 2
00:04:31.655 }
00:04:31.655 ],
00:04:31.655 "driver_specific": {}
00:04:31.655 }
00:04:31.655 ]'
00:04:31.916 13:30:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:31.916 13:30:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
13:30:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:31.916 [2024-07-12 13:30:20.287200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:31.916 [2024-07-12 13:30:20.287241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:31.916 [2024-07-12 13:30:20.287256] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4e77f50
00:04:31.916 [2024-07-12 13:30:20.287264] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:31.916 [2024-07-12 13:30:20.288189] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:31.916 [2024-07-12 13:30:20.288210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:31.916 Passthru0
00:04:31.916 13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:30:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:31.916 13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:30:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:31.916 {
00:04:31.916 "name": "Malloc0",
00:04:31.916 "aliases": [
00:04:31.916 "ab7d3f8f-5670-44e0-95e1-2c5f49e82de3"
00:04:31.916 ],
00:04:31.916 "product_name": "Malloc disk",
00:04:31.916 "block_size": 512,
00:04:31.916 "num_blocks": 16384,
00:04:31.916 "uuid": "ab7d3f8f-5670-44e0-95e1-2c5f49e82de3",
00:04:31.916 "assigned_rate_limits": {
00:04:31.916 "rw_ios_per_sec": 0,
00:04:31.916 "rw_mbytes_per_sec": 0,
00:04:31.916 "r_mbytes_per_sec": 0,
00:04:31.916 "w_mbytes_per_sec": 0
00:04:31.916 },
00:04:31.916 "claimed": true,
00:04:31.916 "claim_type": "exclusive_write",
00:04:31.916 "zoned": false,
00:04:31.916 "supported_io_types": {
00:04:31.916 "read": true,
00:04:31.916 "write": true,
00:04:31.916 "unmap": true,
00:04:31.916 "flush": true,
00:04:31.916 "reset": true,
00:04:31.916 "nvme_admin": false,
00:04:31.916 "nvme_io": false,
00:04:31.916 "nvme_io_md": false,
00:04:31.916 "write_zeroes": true,
00:04:31.916 "zcopy": true,
00:04:31.916 "get_zone_info": false,
00:04:31.916 "zone_management": false,
00:04:31.916 "zone_append": false,
00:04:31.916 "compare": false,
00:04:31.916 "compare_and_write": false,
00:04:31.916 "abort": true,
00:04:31.916 "seek_hole": false,
00:04:31.916 "seek_data": false,
00:04:31.916 "copy": true,
00:04:31.916 "nvme_iov_md": false
00:04:31.916 },
00:04:31.916 "memory_domains": [
00:04:31.916 {
00:04:31.916 "dma_device_id": "system",
00:04:31.916 "dma_device_type": 1
00:04:31.916 },
00:04:31.916 {
00:04:31.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:31.916 "dma_device_type": 2
00:04:31.916 }
00:04:31.916 ],
00:04:31.916 "driver_specific": {}
00:04:31.916 },
00:04:31.916 {
00:04:31.916 "name": "Passthru0",
00:04:31.916 "aliases": [
00:04:31.916 "005b8086-47cc-5a3f-9247-5cb634811a76"
00:04:31.916 ],
00:04:31.916 "product_name": "passthru",
00:04:31.916 "block_size": 512,
00:04:31.916 "num_blocks": 16384,
00:04:31.916 "uuid": "005b8086-47cc-5a3f-9247-5cb634811a76",
00:04:31.916 "assigned_rate_limits": {
00:04:31.916 "rw_ios_per_sec": 0,
00:04:31.916 "rw_mbytes_per_sec": 0,
00:04:31.916 "r_mbytes_per_sec": 0,
00:04:31.916 "w_mbytes_per_sec": 0
00:04:31.916 },
00:04:31.916 "claimed": false,
00:04:31.916 "zoned": false,
00:04:31.916 "supported_io_types": {
00:04:31.916 "read": true,
00:04:31.916 "write": true,
00:04:31.916 "unmap": true,
00:04:31.916 "flush": true,
00:04:31.916 "reset": true,
00:04:31.916 "nvme_admin": false,
00:04:31.916 "nvme_io": false,
00:04:31.916 "nvme_io_md": false,
00:04:31.916 "write_zeroes": true,
00:04:31.916 "zcopy": true,
00:04:31.916 "get_zone_info": false,
00:04:31.916 "zone_management": false,
00:04:31.916 "zone_append": false,
00:04:31.916 "compare": false,
00:04:31.916 "compare_and_write": false,
00:04:31.916 "abort": true,
00:04:31.916 "seek_hole": false,
00:04:31.916 "seek_data": false,
00:04:31.916 "copy": true,
00:04:31.916 "nvme_iov_md": false
00:04:31.916 },
00:04:31.916 "memory_domains": [
00:04:31.916 {
00:04:31.916 "dma_device_id": "system",
00:04:31.916 "dma_device_type": 1
00:04:31.916 },
00:04:31.916 {
00:04:31.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:31.916 "dma_device_type": 2
00:04:31.916 }
00:04:31.916 ],
00:04:31.916 "driver_specific": {
00:04:31.916 "passthru": {
00:04:31.916 "name": "Passthru0",
00:04:31.916 "base_bdev_name": "Malloc0"
00:04:31.916 }
00:04:31.916 }
00:04:31.916 }
00:04:31.916 ]'
00:04:31.916 13:30:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:31.916 13:30:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
13:30:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:31.916 13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:30:20 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:31.916 13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:30:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:31.916 13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:30:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
13:30:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:31.916 13:30:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:31.916
00:04:31.916 real	0m0.296s
00:04:31.916 user	0m0.190s
00:04:31.916 sys	0m0.043s
00:04:31.916 13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:31.916 13:30:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:31.916 ************************************
00:04:31.916 END TEST rpc_integrity
00:04:31.916 ************************************
00:04:31.916 13:30:20 rpc -- common/autotest_common.sh@1142 -- # return 0
00:04:31.916 13:30:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
13:30:20 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
13:30:20 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
13:30:20 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:32.177 ************************************
00:04:32.177 START TEST rpc_plugins
00:04:32.177 ************************************
00:04:32.177 13:30:20 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins
00:04:32.177 13:30:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:32.177 13:30:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:32.177 13:30:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:32.177 13:30:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:30:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
13:30:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
13:30:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
13:30:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:32.177 13:30:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:30:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:32.177 {
00:04:32.177 "name": "Malloc1",
00:04:32.177 "aliases": [
00:04:32.177 "93600878-9fd4-435a-974b-1d0a5668f92f"
00:04:32.177 ],
00:04:32.177 "product_name": "Malloc disk",
00:04:32.178 "block_size": 4096,
00:04:32.178 "num_blocks": 256,
00:04:32.178 "uuid": "93600878-9fd4-435a-974b-1d0a5668f92f",
00:04:32.178 "assigned_rate_limits": {
00:04:32.178 "rw_ios_per_sec": 0,
00:04:32.178 "rw_mbytes_per_sec": 0,
00:04:32.178 "r_mbytes_per_sec": 0,
00:04:32.178 "w_mbytes_per_sec": 0
00:04:32.178 },
00:04:32.178 "claimed": false,
00:04:32.178 "zoned": false,
00:04:32.178 "supported_io_types": {
00:04:32.178 "read": true,
00:04:32.178 "write": true,
00:04:32.178 "unmap": true,
00:04:32.178 "flush": true,
00:04:32.178 "reset": true,
00:04:32.178 "nvme_admin": false,
00:04:32.178 "nvme_io": false,
00:04:32.178 "nvme_io_md": false,
00:04:32.178 "write_zeroes": true,
00:04:32.178 "zcopy": true,
00:04:32.178 "get_zone_info": false,
00:04:32.178 "zone_management": false,
00:04:32.178 "zone_append": false,
00:04:32.178 "compare": false,
00:04:32.178 "compare_and_write": false,
00:04:32.178 "abort": true,
00:04:32.178 "seek_hole": false,
00:04:32.178 "seek_data": false,
00:04:32.178 "copy": true,
00:04:32.178 "nvme_iov_md": false
00:04:32.178 },
00:04:32.178 "memory_domains": [
00:04:32.178 {
00:04:32.178 "dma_device_id": "system",
00:04:32.178 "dma_device_type": 1
00:04:32.178 },
00:04:32.178 {
00:04:32.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:32.178 "dma_device_type": 2
00:04:32.178 }
00:04:32.178 ],
00:04:32.178 "driver_specific": {}
00:04:32.178 }
00:04:32.178 ]'
00:04:32.178 13:30:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:32.178 13:30:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
13:30:20 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
13:30:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
13:30:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:32.178 13:30:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:30:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
13:30:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
13:30:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:32.178 13:30:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:30:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
13:30:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:32.178 13:30:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:32.178
00:04:32.178 real	0m0.129s
00:04:32.178 user	0m0.079s
00:04:32.178 sys	0m0.019s
00:04:32.178 13:30:20 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:32.178 13:30:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:32.178 ************************************
00:04:32.178 END TEST rpc_plugins
00:04:32.178 ************************************
00:04:32.178 13:30:20 rpc -- common/autotest_common.sh@1142 -- # return 0
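The --plugin flag makes rpc.py import an extra Python module by name from PYTHONPATH (here a module named rpc_plugin under test/rpc_plugins, which registers the create_malloc/delete_malloc methods used above); the invocation pattern is:

  export PYTHONPATH=$PYTHONPATH:/path/to/spdk/test/rpc_plugins
  ./scripts/rpc.py --plugin rpc_plugin create_malloc
  ./scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1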
00:04:32.178 13:30:20 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
13:30:20 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
13:30:20 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
13:30:20 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:32.178 ************************************
00:04:32.178 START TEST rpc_trace_cmd_test
00:04:32.178 ************************************
00:04:32.178 13:30:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test
00:04:32.178 13:30:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:04:32.178 13:30:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:04:32.178 13:30:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:32.178 13:30:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:32.178 13:30:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:30:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:04:32.178 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2415157",
00:04:32.178 "tpoint_group_mask": "0x8",
00:04:32.178 "iscsi_conn": {
00:04:32.178 "mask": "0x2",
00:04:32.178 "tpoint_mask": "0x0"
00:04:32.178 },
00:04:32.178 "scsi": {
00:04:32.178 "mask": "0x4",
00:04:32.178 "tpoint_mask": "0x0"
00:04:32.178 },
00:04:32.178 "bdev": {
00:04:32.178 "mask": "0x8",
00:04:32.178 "tpoint_mask": "0xffffffffffffffff"
00:04:32.178 },
00:04:32.178 "nvmf_rdma": {
00:04:32.178 "mask": "0x10",
00:04:32.178 "tpoint_mask": "0x0"
00:04:32.178 },
00:04:32.178 "nvmf_tcp": {
00:04:32.178 "mask": "0x20",
00:04:32.178 "tpoint_mask": "0x0"
00:04:32.178 },
00:04:32.178 "ftl": {
00:04:32.178 "mask": "0x40",
00:04:32.178 "tpoint_mask": "0x0"
00:04:32.178 },
00:04:32.178 "blobfs": {
00:04:32.178 "mask": "0x80",
00:04:32.178 "tpoint_mask": "0x0"
00:04:32.178 },
00:04:32.178 "dsa": {
00:04:32.178 "mask": "0x200",
00:04:32.178 "tpoint_mask": "0x0"
00:04:32.178 },
00:04:32.178 "thread": {
00:04:32.178 "mask": "0x400",
00:04:32.178 "tpoint_mask": "0x0"
00:04:32.178 },
00:04:32.178 "nvme_pcie": {
00:04:32.178 "mask": "0x800",
00:04:32.178 "tpoint_mask": "0x0"
00:04:32.178 },
00:04:32.178 "iaa": {
00:04:32.178 "mask": "0x1000",
00:04:32.178 "tpoint_mask": "0x0"
00:04:32.178 },
00:04:32.178 "nvme_tcp": {
00:04:32.178 "mask": "0x2000",
00:04:32.178 "tpoint_mask": "0x0"
00:04:32.178 },
00:04:32.178 "bdev_nvme": {
00:04:32.178 "mask": "0x4000",
00:04:32.178 "tpoint_mask": "0x0"
00:04:32.178 },
00:04:32.178 "sock": {
00:04:32.178 "mask": "0x8000",
00:04:32.178 "tpoint_mask": "0x0"
00:04:32.178 }
00:04:32.178 }'
00:04:32.439 13:30:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:04:32.439 13:30:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']'
00:04:32.439 13:30:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:04:32.439 13:30:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:04:32.439 13:30:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:04:32.439 13:30:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:04:32.439 13:30:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:04:32.439 13:30:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:04:32.439 13:30:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:04:32.439 13:30:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:04:32.439
00:04:32.439 real	0m0.239s
00:04:32.439 user	0m0.202s
00:04:32.439 sys	0m0.027s
00:04:32.439 13:30:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:32.439 13:30:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:32.439 ************************************
00:04:32.439 END TEST rpc_trace_cmd_test
00:04:32.439 ************************************
00:04:32.439 13:30:20 rpc -- common/autotest_common.sh@1142 -- # return 0
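trace_get_info returns the trace shared-memory path and the per-group tpoint masks as one JSON object, which the test slices with jq; the same queries can be run by hand (sketch):

  ./scripts/rpc.py trace_get_info > info.json
  jq -r .tpoint_shm_path info.json    # /dev/shm/spdk_tgt_trace.pid2415157, usable for offline analysis
  jq -r .bdev.tpoint_mask info.json   # 0xffffffffffffffff, because spdk_tgt was started with -e bdev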
00:04:32.700 "nvme_io": false, 00:04:32.700 "nvme_io_md": false, 00:04:32.700 "write_zeroes": true, 00:04:32.700 "zcopy": true, 00:04:32.700 "get_zone_info": false, 00:04:32.700 "zone_management": false, 00:04:32.700 "zone_append": false, 00:04:32.700 "compare": false, 00:04:32.700 "compare_and_write": false, 00:04:32.700 "abort": true, 00:04:32.700 "seek_hole": false, 00:04:32.700 "seek_data": false, 00:04:32.700 "copy": true, 00:04:32.700 "nvme_iov_md": false 00:04:32.700 }, 00:04:32.700 "memory_domains": [ 00:04:32.700 { 00:04:32.700 "dma_device_id": "system", 00:04:32.700 "dma_device_type": 1 00:04:32.700 }, 00:04:32.700 { 00:04:32.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.701 "dma_device_type": 2 00:04:32.701 } 00:04:32.701 ], 00:04:32.701 "driver_specific": {} 00:04:32.701 } 00:04:32.701 ]' 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.701 [2024-07-12 13:30:21.153388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:32.701 [2024-07-12 13:30:21.153415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:32.701 [2024-07-12 13:30:21.153431] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5004c20 00:04:32.701 [2024-07-12 13:30:21.153439] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:32.701 [2024-07-12 13:30:21.154265] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:32.701 [2024-07-12 13:30:21.154285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:32.701 Passthru0 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:32.701 { 00:04:32.701 "name": "Malloc2", 00:04:32.701 "aliases": [ 00:04:32.701 "31a641b9-72f4-4e95-8e07-2a274220b33c" 00:04:32.701 ], 00:04:32.701 "product_name": "Malloc disk", 00:04:32.701 "block_size": 512, 00:04:32.701 "num_blocks": 16384, 00:04:32.701 "uuid": "31a641b9-72f4-4e95-8e07-2a274220b33c", 00:04:32.701 "assigned_rate_limits": { 00:04:32.701 "rw_ios_per_sec": 0, 00:04:32.701 "rw_mbytes_per_sec": 0, 00:04:32.701 "r_mbytes_per_sec": 0, 00:04:32.701 "w_mbytes_per_sec": 0 00:04:32.701 }, 00:04:32.701 "claimed": true, 00:04:32.701 "claim_type": "exclusive_write", 00:04:32.701 "zoned": false, 00:04:32.701 "supported_io_types": { 00:04:32.701 "read": true, 00:04:32.701 "write": true, 00:04:32.701 "unmap": true, 00:04:32.701 "flush": true, 00:04:32.701 "reset": true, 00:04:32.701 "nvme_admin": false, 00:04:32.701 "nvme_io": false, 00:04:32.701 "nvme_io_md": false, 00:04:32.701 "write_zeroes": true, 00:04:32.701 "zcopy": true, 
00:04:32.701 "get_zone_info": false, 00:04:32.701 "zone_management": false, 00:04:32.701 "zone_append": false, 00:04:32.701 "compare": false, 00:04:32.701 "compare_and_write": false, 00:04:32.701 "abort": true, 00:04:32.701 "seek_hole": false, 00:04:32.701 "seek_data": false, 00:04:32.701 "copy": true, 00:04:32.701 "nvme_iov_md": false 00:04:32.701 }, 00:04:32.701 "memory_domains": [ 00:04:32.701 { 00:04:32.701 "dma_device_id": "system", 00:04:32.701 "dma_device_type": 1 00:04:32.701 }, 00:04:32.701 { 00:04:32.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.701 "dma_device_type": 2 00:04:32.701 } 00:04:32.701 ], 00:04:32.701 "driver_specific": {} 00:04:32.701 }, 00:04:32.701 { 00:04:32.701 "name": "Passthru0", 00:04:32.701 "aliases": [ 00:04:32.701 "6a5dd2ca-1221-5a41-8240-6081627f8f8d" 00:04:32.701 ], 00:04:32.701 "product_name": "passthru", 00:04:32.701 "block_size": 512, 00:04:32.701 "num_blocks": 16384, 00:04:32.701 "uuid": "6a5dd2ca-1221-5a41-8240-6081627f8f8d", 00:04:32.701 "assigned_rate_limits": { 00:04:32.701 "rw_ios_per_sec": 0, 00:04:32.701 "rw_mbytes_per_sec": 0, 00:04:32.701 "r_mbytes_per_sec": 0, 00:04:32.701 "w_mbytes_per_sec": 0 00:04:32.701 }, 00:04:32.701 "claimed": false, 00:04:32.701 "zoned": false, 00:04:32.701 "supported_io_types": { 00:04:32.701 "read": true, 00:04:32.701 "write": true, 00:04:32.701 "unmap": true, 00:04:32.701 "flush": true, 00:04:32.701 "reset": true, 00:04:32.701 "nvme_admin": false, 00:04:32.701 "nvme_io": false, 00:04:32.701 "nvme_io_md": false, 00:04:32.701 "write_zeroes": true, 00:04:32.701 "zcopy": true, 00:04:32.701 "get_zone_info": false, 00:04:32.701 "zone_management": false, 00:04:32.701 "zone_append": false, 00:04:32.701 "compare": false, 00:04:32.701 "compare_and_write": false, 00:04:32.701 "abort": true, 00:04:32.701 "seek_hole": false, 00:04:32.701 "seek_data": false, 00:04:32.701 "copy": true, 00:04:32.701 "nvme_iov_md": false 00:04:32.701 }, 00:04:32.701 "memory_domains": [ 00:04:32.701 { 00:04:32.701 "dma_device_id": "system", 00:04:32.701 "dma_device_type": 1 00:04:32.701 }, 00:04:32.701 { 00:04:32.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.701 "dma_device_type": 2 00:04:32.701 } 00:04:32.701 ], 00:04:32.701 "driver_specific": { 00:04:32.701 "passthru": { 00:04:32.701 "name": "Passthru0", 00:04:32.701 "base_bdev_name": "Malloc2" 00:04:32.701 } 00:04:32.701 } 00:04:32.701 } 00:04:32.701 ]' 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 
00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:32.701 00:04:32.701 real 0m0.256s 00:04:32.701 user 0m0.174s 00:04:32.701 sys 0m0.025s 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.701 13:30:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.701 ************************************ 00:04:32.701 END TEST rpc_daemon_integrity 00:04:32.701 ************************************ 00:04:32.961 13:30:21 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:32.961 13:30:21 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:32.961 13:30:21 rpc -- rpc/rpc.sh@84 -- # killprocess 2415157 00:04:32.961 13:30:21 rpc -- common/autotest_common.sh@948 -- # '[' -z 2415157 ']' 00:04:32.961 13:30:21 rpc -- common/autotest_common.sh@952 -- # kill -0 2415157 00:04:32.961 13:30:21 rpc -- common/autotest_common.sh@953 -- # uname 00:04:32.961 13:30:21 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:32.961 13:30:21 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2415157 00:04:32.961 13:30:21 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:32.961 13:30:21 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:32.961 13:30:21 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2415157' 00:04:32.961 killing process with pid 2415157 00:04:32.961 13:30:21 rpc -- common/autotest_common.sh@967 -- # kill 2415157 00:04:32.961 13:30:21 rpc -- common/autotest_common.sh@972 -- # wait 2415157 00:04:33.222 00:04:33.222 real 0m2.365s 00:04:33.222 user 0m3.090s 00:04:33.222 sys 0m0.661s 00:04:33.222 13:30:21 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.222 13:30:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.222 ************************************ 00:04:33.222 END TEST rpc 00:04:33.222 ************************************ 00:04:33.222 13:30:21 -- common/autotest_common.sh@1142 -- # return 0 00:04:33.222 13:30:21 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:33.222 13:30:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.222 13:30:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.222 13:30:21 -- common/autotest_common.sh@10 -- # set +x 00:04:33.222 ************************************ 00:04:33.222 START TEST skip_rpc 00:04:33.222 ************************************ 00:04:33.222 13:30:21 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:33.222 * Looking for test storage... 
00:04:33.222 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:33.222 13:30:21 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:33.222 13:30:21 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:33.222 13:30:21 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:33.222 13:30:21 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.222 13:30:21 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.222 13:30:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.222 ************************************ 00:04:33.222 START TEST skip_rpc 00:04:33.222 ************************************ 00:04:33.222 13:30:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:33.222 13:30:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2415786 00:04:33.222 13:30:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.222 13:30:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:33.222 13:30:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:33.222 [2024-07-12 13:30:21.801713] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:04:33.222 [2024-07-12 13:30:21.801776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2415786 ] 00:04:33.482 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.482 [2024-07-12 13:30:21.864131] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.482 [2024-07-12 13:30:21.934303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.770 13:30:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:38.770 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:38.770 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:38.770 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:38.770 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:38.770 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:38.770 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:38.770 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:38.770 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.770 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.771 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:38.771 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:38.771 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:38.771 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:38.771 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:38.771 13:30:26 skip_rpc.skip_rpc 
-- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:38.771 13:30:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2415786 00:04:38.771 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2415786 ']' 00:04:38.771 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2415786 00:04:38.771 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:38.771 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:38.771 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2415786 00:04:38.771 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:38.771 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:38.771 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2415786' 00:04:38.771 killing process with pid 2415786 00:04:38.771 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2415786 00:04:38.771 13:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2415786 00:04:38.771 00:04:38.771 real 0m5.263s 00:04:38.771 user 0m5.079s 00:04:38.771 sys 0m0.214s 00:04:38.771 13:30:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.771 13:30:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.771 ************************************ 00:04:38.771 END TEST skip_rpc 00:04:38.771 ************************************ 00:04:38.771 13:30:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:38.771 13:30:27 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:38.771 13:30:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.771 13:30:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.771 13:30:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.771 ************************************ 00:04:38.771 START TEST skip_rpc_with_json 00:04:38.771 ************************************ 00:04:38.771 13:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:38.771 13:30:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:38.771 13:30:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2416820 00:04:38.771 13:30:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.771 13:30:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2416820 00:04:38.771 13:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2416820 ']' 00:04:38.771 13:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.771 13:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.771 13:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
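A minimal sketch of the flow the skip_rpc test just ran: start the target with its RPC server disabled, then assert that any RPC call fails (the NOT wrapper above asserts es=1). Paths are illustrative; the sleep mirrors the test's own sleep 5:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5
    # With no RPC server, rpc.py must fail with a non-zero exit status.
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: spdk_get_version succeeded" >&2; exit 1
    fi
    kill "$spdk_pid"; wait "$spdk_pid"
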
00:04:38.771 13:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.771 13:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.771 13:30:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.771 [2024-07-12 13:30:27.136722] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:04:38.771 [2024-07-12 13:30:27.136804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2416820 ] 00:04:38.771 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.771 [2024-07-12 13:30:27.198891] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.771 [2024-07-12 13:30:27.265279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.341 13:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.341 13:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:39.341 13:30:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:39.341 13:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.341 13:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.341 [2024-07-12 13:30:27.920253] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:39.603 request: 00:04:39.603 { 00:04:39.603 "trtype": "tcp", 00:04:39.603 "method": "nvmf_get_transports", 00:04:39.603 "req_id": 1 00:04:39.603 } 00:04:39.603 Got JSON-RPC error response 00:04:39.603 response: 00:04:39.603 { 00:04:39.603 "code": -19, 00:04:39.603 "message": "No such device" 00:04:39.603 } 00:04:39.603 13:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:39.603 13:30:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:39.603 13:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.603 13:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.603 [2024-07-12 13:30:27.928341] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:39.603 13:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.603 13:30:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:39.603 13:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.603 13:30:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.603 13:30:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.603 13:30:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:39.603 { 00:04:39.603 "subsystems": [ 00:04:39.603 { 00:04:39.603 "subsystem": "scheduler", 00:04:39.603 "config": [ 00:04:39.603 { 00:04:39.603 "method": "framework_set_scheduler", 00:04:39.603 "params": { 00:04:39.603 "name": "static" 00:04:39.603 } 00:04:39.603 } 00:04:39.603 ] 00:04:39.603 }, 00:04:39.603 { 00:04:39.603 "subsystem": "vmd", 00:04:39.603 "config": [] 00:04:39.603 
}, 00:04:39.603 { 00:04:39.603 "subsystem": "sock", 00:04:39.603 "config": [ 00:04:39.603 { 00:04:39.603 "method": "sock_set_default_impl", 00:04:39.603 "params": { 00:04:39.603 "impl_name": "posix" 00:04:39.603 } 00:04:39.603 }, 00:04:39.603 { 00:04:39.603 "method": "sock_impl_set_options", 00:04:39.603 "params": { 00:04:39.603 "impl_name": "ssl", 00:04:39.603 "recv_buf_size": 4096, 00:04:39.603 "send_buf_size": 4096, 00:04:39.603 "enable_recv_pipe": true, 00:04:39.603 "enable_quickack": false, 00:04:39.603 "enable_placement_id": 0, 00:04:39.603 "enable_zerocopy_send_server": true, 00:04:39.603 "enable_zerocopy_send_client": false, 00:04:39.603 "zerocopy_threshold": 0, 00:04:39.603 "tls_version": 0, 00:04:39.603 "enable_ktls": false 00:04:39.603 } 00:04:39.603 }, 00:04:39.603 { 00:04:39.603 "method": "sock_impl_set_options", 00:04:39.603 "params": { 00:04:39.603 "impl_name": "posix", 00:04:39.603 "recv_buf_size": 2097152, 00:04:39.603 "send_buf_size": 2097152, 00:04:39.603 "enable_recv_pipe": true, 00:04:39.603 "enable_quickack": false, 00:04:39.603 "enable_placement_id": 0, 00:04:39.603 "enable_zerocopy_send_server": true, 00:04:39.603 "enable_zerocopy_send_client": false, 00:04:39.603 "zerocopy_threshold": 0, 00:04:39.603 "tls_version": 0, 00:04:39.603 "enable_ktls": false 00:04:39.603 } 00:04:39.603 } 00:04:39.603 ] 00:04:39.603 }, 00:04:39.603 { 00:04:39.603 "subsystem": "iobuf", 00:04:39.603 "config": [ 00:04:39.603 { 00:04:39.603 "method": "iobuf_set_options", 00:04:39.603 "params": { 00:04:39.603 "small_pool_count": 8192, 00:04:39.603 "large_pool_count": 1024, 00:04:39.603 "small_bufsize": 8192, 00:04:39.603 "large_bufsize": 135168 00:04:39.603 } 00:04:39.603 } 00:04:39.603 ] 00:04:39.603 }, 00:04:39.603 { 00:04:39.603 "subsystem": "keyring", 00:04:39.603 "config": [] 00:04:39.603 }, 00:04:39.603 { 00:04:39.603 "subsystem": "vfio_user_target", 00:04:39.603 "config": null 00:04:39.603 }, 00:04:39.603 { 00:04:39.603 "subsystem": "accel", 00:04:39.603 "config": [ 00:04:39.603 { 00:04:39.603 "method": "accel_set_options", 00:04:39.603 "params": { 00:04:39.603 "small_cache_size": 128, 00:04:39.603 "large_cache_size": 16, 00:04:39.603 "task_count": 2048, 00:04:39.603 "sequence_count": 2048, 00:04:39.603 "buf_count": 2048 00:04:39.603 } 00:04:39.603 } 00:04:39.603 ] 00:04:39.603 }, 00:04:39.603 { 00:04:39.603 "subsystem": "bdev", 00:04:39.603 "config": [ 00:04:39.603 { 00:04:39.603 "method": "bdev_set_options", 00:04:39.603 "params": { 00:04:39.603 "bdev_io_pool_size": 65535, 00:04:39.603 "bdev_io_cache_size": 256, 00:04:39.603 "bdev_auto_examine": true, 00:04:39.603 "iobuf_small_cache_size": 128, 00:04:39.603 "iobuf_large_cache_size": 16 00:04:39.603 } 00:04:39.603 }, 00:04:39.603 { 00:04:39.603 "method": "bdev_raid_set_options", 00:04:39.603 "params": { 00:04:39.603 "process_window_size_kb": 1024 00:04:39.603 } 00:04:39.603 }, 00:04:39.603 { 00:04:39.603 "method": "bdev_nvme_set_options", 00:04:39.603 "params": { 00:04:39.603 "action_on_timeout": "none", 00:04:39.603 "timeout_us": 0, 00:04:39.603 "timeout_admin_us": 0, 00:04:39.603 "keep_alive_timeout_ms": 10000, 00:04:39.603 "arbitration_burst": 0, 00:04:39.603 "low_priority_weight": 0, 00:04:39.603 "medium_priority_weight": 0, 00:04:39.603 "high_priority_weight": 0, 00:04:39.603 "nvme_adminq_poll_period_us": 10000, 00:04:39.603 "nvme_ioq_poll_period_us": 0, 00:04:39.603 "io_queue_requests": 0, 00:04:39.603 "delay_cmd_submit": true, 00:04:39.603 "transport_retry_count": 4, 00:04:39.603 "bdev_retry_count": 3, 00:04:39.603 
"transport_ack_timeout": 0, 00:04:39.603 "ctrlr_loss_timeout_sec": 0, 00:04:39.603 "reconnect_delay_sec": 0, 00:04:39.603 "fast_io_fail_timeout_sec": 0, 00:04:39.603 "disable_auto_failback": false, 00:04:39.603 "generate_uuids": false, 00:04:39.603 "transport_tos": 0, 00:04:39.603 "nvme_error_stat": false, 00:04:39.603 "rdma_srq_size": 0, 00:04:39.603 "io_path_stat": false, 00:04:39.603 "allow_accel_sequence": false, 00:04:39.603 "rdma_max_cq_size": 0, 00:04:39.603 "rdma_cm_event_timeout_ms": 0, 00:04:39.603 "dhchap_digests": [ 00:04:39.603 "sha256", 00:04:39.603 "sha384", 00:04:39.603 "sha512" 00:04:39.603 ], 00:04:39.603 "dhchap_dhgroups": [ 00:04:39.603 "null", 00:04:39.603 "ffdhe2048", 00:04:39.603 "ffdhe3072", 00:04:39.603 "ffdhe4096", 00:04:39.604 "ffdhe6144", 00:04:39.604 "ffdhe8192" 00:04:39.604 ] 00:04:39.604 } 00:04:39.604 }, 00:04:39.604 { 00:04:39.604 "method": "bdev_nvme_set_hotplug", 00:04:39.604 "params": { 00:04:39.604 "period_us": 100000, 00:04:39.604 "enable": false 00:04:39.604 } 00:04:39.604 }, 00:04:39.604 { 00:04:39.604 "method": "bdev_iscsi_set_options", 00:04:39.604 "params": { 00:04:39.604 "timeout_sec": 30 00:04:39.604 } 00:04:39.604 }, 00:04:39.604 { 00:04:39.604 "method": "bdev_wait_for_examine" 00:04:39.604 } 00:04:39.604 ] 00:04:39.604 }, 00:04:39.604 { 00:04:39.604 "subsystem": "nvmf", 00:04:39.604 "config": [ 00:04:39.604 { 00:04:39.604 "method": "nvmf_set_config", 00:04:39.604 "params": { 00:04:39.604 "discovery_filter": "match_any", 00:04:39.604 "admin_cmd_passthru": { 00:04:39.604 "identify_ctrlr": false 00:04:39.604 } 00:04:39.604 } 00:04:39.604 }, 00:04:39.604 { 00:04:39.604 "method": "nvmf_set_max_subsystems", 00:04:39.604 "params": { 00:04:39.604 "max_subsystems": 1024 00:04:39.604 } 00:04:39.604 }, 00:04:39.604 { 00:04:39.604 "method": "nvmf_set_crdt", 00:04:39.604 "params": { 00:04:39.604 "crdt1": 0, 00:04:39.604 "crdt2": 0, 00:04:39.604 "crdt3": 0 00:04:39.604 } 00:04:39.604 }, 00:04:39.604 { 00:04:39.604 "method": "nvmf_create_transport", 00:04:39.604 "params": { 00:04:39.604 "trtype": "TCP", 00:04:39.604 "max_queue_depth": 128, 00:04:39.604 "max_io_qpairs_per_ctrlr": 127, 00:04:39.604 "in_capsule_data_size": 4096, 00:04:39.604 "max_io_size": 131072, 00:04:39.604 "io_unit_size": 131072, 00:04:39.604 "max_aq_depth": 128, 00:04:39.604 "num_shared_buffers": 511, 00:04:39.604 "buf_cache_size": 4294967295, 00:04:39.604 "dif_insert_or_strip": false, 00:04:39.604 "zcopy": false, 00:04:39.604 "c2h_success": true, 00:04:39.604 "sock_priority": 0, 00:04:39.604 "abort_timeout_sec": 1, 00:04:39.604 "ack_timeout": 0, 00:04:39.604 "data_wr_pool_size": 0 00:04:39.604 } 00:04:39.604 } 00:04:39.604 ] 00:04:39.604 }, 00:04:39.604 { 00:04:39.604 "subsystem": "nbd", 00:04:39.604 "config": [] 00:04:39.604 }, 00:04:39.604 { 00:04:39.604 "subsystem": "ublk", 00:04:39.604 "config": [] 00:04:39.604 }, 00:04:39.604 { 00:04:39.604 "subsystem": "vhost_blk", 00:04:39.604 "config": [] 00:04:39.604 }, 00:04:39.604 { 00:04:39.604 "subsystem": "scsi", 00:04:39.604 "config": null 00:04:39.604 }, 00:04:39.604 { 00:04:39.604 "subsystem": "iscsi", 00:04:39.604 "config": [ 00:04:39.604 { 00:04:39.604 "method": "iscsi_set_options", 00:04:39.604 "params": { 00:04:39.604 "node_base": "iqn.2016-06.io.spdk", 00:04:39.604 "max_sessions": 128, 00:04:39.604 "max_connections_per_session": 2, 00:04:39.604 "max_queue_depth": 64, 00:04:39.604 "default_time2wait": 2, 00:04:39.604 "default_time2retain": 20, 00:04:39.604 "first_burst_length": 8192, 00:04:39.604 "immediate_data": true, 00:04:39.604 
"allow_duplicated_isid": false, 00:04:39.604 "error_recovery_level": 0, 00:04:39.604 "nop_timeout": 60, 00:04:39.604 "nop_in_interval": 30, 00:04:39.604 "disable_chap": false, 00:04:39.604 "require_chap": false, 00:04:39.604 "mutual_chap": false, 00:04:39.604 "chap_group": 0, 00:04:39.604 "max_large_datain_per_connection": 64, 00:04:39.604 "max_r2t_per_connection": 4, 00:04:39.604 "pdu_pool_size": 36864, 00:04:39.604 "immediate_data_pool_size": 16384, 00:04:39.604 "data_out_pool_size": 2048 00:04:39.604 } 00:04:39.604 } 00:04:39.604 ] 00:04:39.604 }, 00:04:39.604 { 00:04:39.604 "subsystem": "vhost_scsi", 00:04:39.604 "config": [] 00:04:39.604 } 00:04:39.604 ] 00:04:39.604 } 00:04:39.604 13:30:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:39.604 13:30:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2416820 00:04:39.604 13:30:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2416820 ']' 00:04:39.604 13:30:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2416820 00:04:39.604 13:30:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:39.604 13:30:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:39.604 13:30:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2416820 00:04:39.604 13:30:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:39.604 13:30:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:39.604 13:30:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2416820' 00:04:39.604 killing process with pid 2416820 00:04:39.604 13:30:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2416820 00:04:39.604 13:30:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2416820 00:04:39.866 13:30:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2417160 00:04:39.866 13:30:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:39.866 13:30:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2417160 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2417160 ']' 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2417160 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2417160 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2417160' 00:04:45.149 killing process with pid 2417160 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@967 -- # kill 2417160 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2417160 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:45.149 00:04:45.149 real 0m6.485s 00:04:45.149 user 0m6.345s 00:04:45.149 sys 0m0.495s 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.149 ************************************ 00:04:45.149 END TEST skip_rpc_with_json 00:04:45.149 ************************************ 00:04:45.149 13:30:33 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:45.149 13:30:33 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:45.149 13:30:33 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.149 13:30:33 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.149 13:30:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.149 ************************************ 00:04:45.149 START TEST skip_rpc_with_delay 00:04:45.149 ************************************ 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:45.149 [2024-07-12 13:30:33.698561] app.c: 
831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:45.149 [2024-07-12 13:30:33.698712] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:45.149 00:04:45.149 real 0m0.042s 00:04:45.149 user 0m0.021s 00:04:45.149 sys 0m0.021s 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.149 13:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:45.149 ************************************ 00:04:45.150 END TEST skip_rpc_with_delay 00:04:45.150 ************************************ 00:04:45.410 13:30:33 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:45.410 13:30:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:45.410 13:30:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:45.410 13:30:33 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:45.410 13:30:33 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.410 13:30:33 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.410 13:30:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.410 ************************************ 00:04:45.410 START TEST exit_on_failed_rpc_init 00:04:45.410 ************************************ 00:04:45.410 13:30:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:45.410 13:30:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2418231 00:04:45.410 13:30:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2418231 00:04:45.410 13:30:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:45.410 13:30:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2418231 ']' 00:04:45.410 13:30:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.410 13:30:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:45.410 13:30:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.410 13:30:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:45.410 13:30:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:45.410 [2024-07-12 13:30:33.817364] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
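The skip_rpc_with_json run above is a save/load round-trip: mutate live config over RPC, snapshot it with save_config, relaunch from the snapshot, and grep the new log for proof the config was applied. A sketch, assuming the target's output is redirected to the LOG_PATH file the test declared (paths and redirection illustrative):

    ./scripts/rpc.py nvmf_create_transport -t tcp          # mutate live config
    ./scripts/rpc.py save_config > config.json             # snapshot it
    # relaunch from the snapshot, RPC server disabled this time
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    sleep 5; kill -SIGINT $!; wait
    grep -q 'TCP Transport Init' log.txt                   # transport restored from JSON
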
00:04:45.410 [2024-07-12 13:30:33.817446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2418231 ] 00:04:45.410 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.410 [2024-07-12 13:30:33.886114] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.410 [2024-07-12 13:30:33.961154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:46.353 [2024-07-12 13:30:34.653415] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
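The skip_rpc_with_delay failure above is the intended result: --wait-for-rpc pauses startup until an RPC arrives, which is impossible without an RPC server, so spdk_tgt must refuse the combination. A sketch of the check (paths illustrative):

    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: target accepted --wait-for-rpc" >&2; exit 1
    fi
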
00:04:46.353 [2024-07-12 13:30:34.653491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2418510 ] 00:04:46.353 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.353 [2024-07-12 13:30:34.732768] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.353 [2024-07-12 13:30:34.798613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.353 [2024-07-12 13:30:34.798698] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:46.353 [2024-07-12 13:30:34.798708] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:46.353 [2024-07-12 13:30:34.798715] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2418231 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2418231 ']' 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2418231 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2418231 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2418231' 00:04:46.353 killing process with pid 2418231 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2418231 00:04:46.353 13:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2418231 00:04:46.613 00:04:46.613 real 0m1.317s 00:04:46.613 user 0m1.518s 00:04:46.613 sys 0m0.383s 00:04:46.613 13:30:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.613 13:30:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:46.613 ************************************ 00:04:46.613 END TEST exit_on_failed_rpc_init 00:04:46.613 ************************************ 00:04:46.613 13:30:35 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:46.613 13:30:35 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:46.613 00:04:46.613 real 0m13.503s 00:04:46.613 user 0m13.104s 00:04:46.613 sys 0m1.388s 00:04:46.613 13:30:35 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.613 13:30:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.613 ************************************ 00:04:46.613 END TEST skip_rpc 00:04:46.613 ************************************ 00:04:46.613 13:30:35 -- common/autotest_common.sh@1142 -- # return 0 00:04:46.613 13:30:35 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:46.613 13:30:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.613 13:30:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.613 13:30:35 -- common/autotest_common.sh@10 -- # set +x 00:04:46.874 ************************************ 00:04:46.874 START TEST rpc_client 00:04:46.874 ************************************ 00:04:46.874 13:30:35 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:46.874 * Looking for test storage... 00:04:46.874 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:04:46.874 13:30:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:46.874 OK 00:04:46.874 13:30:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:46.874 00:04:46.874 real 0m0.118s 00:04:46.874 user 0m0.051s 00:04:46.874 sys 0m0.074s 00:04:46.874 13:30:35 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.874 13:30:35 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:46.874 ************************************ 00:04:46.874 END TEST rpc_client 00:04:46.874 ************************************ 00:04:46.874 13:30:35 -- common/autotest_common.sh@1142 -- # return 0 00:04:46.874 13:30:35 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:46.874 13:30:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.874 13:30:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.874 13:30:35 -- common/autotest_common.sh@10 -- # set +x 00:04:46.874 ************************************ 00:04:46.874 START TEST json_config 00:04:46.874 ************************************ 00:04:46.874 13:30:35 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:47.137 13:30:35 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
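The exit_on_failed_rpc_init scenario above boils down to a double bind on the default RPC socket: a second target pointed at the same /var/tmp/spdk.sock must log the "in use" error and exit non-zero. A sketch, assuming illustrative paths (the real test uses waitforlisten rather than a fixed sleep):

    ./build/bin/spdk_tgt -m 0x1 &      # first target claims /var/tmp/spdk.sock
    first=$!
    sleep 2
    if ./build/bin/spdk_tgt -m 0x2; then
        echo "unexpected: second target initialized its RPC server" >&2; exit 1
    fi
    kill "$first"
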
00:04:47.137 13:30:35 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:47.137 13:30:35 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.137 13:30:35 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.137 13:30:35 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.137 13:30:35 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.137 13:30:35 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.137 13:30:35 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.137 13:30:35 json_config -- paths/export.sh@5 -- # export PATH 00:04:47.137 13:30:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@47 -- # : 0 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 
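The host-identity setup nvmf/common.sh performs here derives a host ID from a freshly generated NQN. A small sketch using the values from this log; the parameter expansion is illustrative, not necessarily the script's own:

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}  # strip down to the bare UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
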
00:04:47.137 13:30:35 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:47.137 13:30:35 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:47.137 13:30:35 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:47.137 13:30:35 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:47.137 13:30:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:47.137 13:30:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:47.137 13:30:35 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:47.137 13:30:35 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:47.137 WARNING: No tests are enabled so not running JSON configuration tests 00:04:47.137 13:30:35 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:47.137 00:04:47.137 real 0m0.096s 00:04:47.137 user 0m0.049s 00:04:47.137 sys 0m0.047s 00:04:47.137 13:30:35 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.137 13:30:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.137 ************************************ 00:04:47.137 END TEST json_config 00:04:47.137 ************************************ 00:04:47.137 13:30:35 -- common/autotest_common.sh@1142 -- # return 0 00:04:47.137 13:30:35 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:47.137 13:30:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.137 13:30:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.137 13:30:35 -- common/autotest_common.sh@10 -- # set +x 00:04:47.137 ************************************ 00:04:47.137 START TEST json_config_extra_key 00:04:47.137 ************************************ 00:04:47.137 13:30:35 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:47.137 13:30:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.137 13:30:35 json_config_extra_key -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:47.137 13:30:35 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.137 13:30:35 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.137 13:30:35 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.137 13:30:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.137 13:30:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.137 13:30:35 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.137 13:30:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:47.137 13:30:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:47.137 
13:30:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:47.137 13:30:35 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:47.137 13:30:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:47.137 13:30:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:47.137 13:30:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:47.137 13:30:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:47.137 13:30:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:47.137 13:30:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:47.137 13:30:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:47.137 13:30:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:47.137 13:30:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:47.137 13:30:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:47.138 13:30:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:47.138 INFO: launching applications... 00:04:47.138 13:30:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:47.138 13:30:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:47.138 13:30:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:47.138 13:30:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:47.138 13:30:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:47.138 13:30:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:47.138 13:30:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:47.138 13:30:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:47.138 13:30:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2418720 00:04:47.138 13:30:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:47.138 Waiting for target to run... 
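A sketch of the launch-and-wait pattern json_config_extra_key uses above: start the target on its own RPC socket with the extra_key config, then poll until the socket answers. The polling loop is an assumed stand-in for waitforlisten, which also tracks the pid; socket path and config path are as in the log:

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    app_pid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
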
00:04:47.138 13:30:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2418720 /var/tmp/spdk_tgt.sock 00:04:47.138 13:30:35 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2418720 ']' 00:04:47.138 13:30:35 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:47.138 13:30:35 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:47.138 13:30:35 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.138 13:30:35 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:47.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:47.138 13:30:35 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.138 13:30:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:47.138 [2024-07-12 13:30:35.709292] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:04:47.138 [2024-07-12 13:30:35.709366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2418720 ] 00:04:47.399 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.660 [2024-07-12 13:30:35.992765] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.660 [2024-07-12 13:30:36.046928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.231 13:30:36 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.231 13:30:36 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:48.231 13:30:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:48.231 00:04:48.231 13:30:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:48.231 INFO: shutting down applications... 
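The shutdown that follows is a SIGINT plus a bounded kill -0 poll, exactly the i<30 / sleep 0.5 loop visible in json_config/common.sh above. A sketch, assuming $app_pid holds the target started earlier (variable name illustrative):

    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break   # process gone: shutdown done
        sleep 0.5
    done
    kill -0 "$app_pid" 2>/dev/null && exit 1      # still alive after ~15 s: fail
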
00:04:48.231 13:30:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:48.231 13:30:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:48.231 13:30:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:48.231 13:30:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2418720 ]] 00:04:48.231 13:30:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2418720 00:04:48.231 13:30:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:48.231 13:30:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.231 13:30:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2418720 00:04:48.231 13:30:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.492 13:30:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.492 13:30:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.492 13:30:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2418720 00:04:48.492 13:30:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:48.492 13:30:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:48.492 13:30:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:48.492 13:30:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:48.492 SPDK target shutdown done 00:04:48.492 13:30:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:48.492 Success 00:04:48.492 00:04:48.492 real 0m1.441s 00:04:48.492 user 0m1.076s 00:04:48.492 sys 0m0.376s 00:04:48.492 13:30:37 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.492 13:30:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:48.492 ************************************ 00:04:48.492 END TEST json_config_extra_key 00:04:48.492 ************************************ 00:04:48.492 13:30:37 -- common/autotest_common.sh@1142 -- # return 0 00:04:48.492 13:30:37 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.492 13:30:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.492 13:30:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.492 13:30:37 -- common/autotest_common.sh@10 -- # set +x 00:04:48.751 ************************************ 00:04:48.751 START TEST alias_rpc 00:04:48.751 ************************************ 00:04:48.751 13:30:37 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.751 * Looking for test storage... 
00:04:48.751 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:04:48.751 13:30:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:48.751 13:30:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2419100 00:04:48.751 13:30:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2419100 00:04:48.751 13:30:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.751 13:30:37 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2419100 ']' 00:04:48.751 13:30:37 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.751 13:30:37 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.751 13:30:37 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.751 13:30:37 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.751 13:30:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.751 [2024-07-12 13:30:37.211930] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:04:48.751 [2024-07-12 13:30:37.212020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2419100 ] 00:04:48.751 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.751 [2024-07-12 13:30:37.284225] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.010 [2024-07-12 13:30:37.351589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.594 13:30:38 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:49.594 13:30:38 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:49.594 13:30:38 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:49.854 13:30:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2419100 00:04:49.854 13:30:38 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2419100 ']' 00:04:49.854 13:30:38 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2419100 00:04:49.854 13:30:38 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:49.854 13:30:38 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.854 13:30:38 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2419100 00:04:49.854 13:30:38 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:49.854 13:30:38 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.854 13:30:38 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2419100' 00:04:49.854 killing process with pid 2419100 00:04:49.854 13:30:38 alias_rpc -- common/autotest_common.sh@967 -- # kill 2419100 00:04:49.854 13:30:38 alias_rpc -- common/autotest_common.sh@972 -- # wait 2419100 00:04:50.116 00:04:50.116 real 0m1.371s 00:04:50.116 user 0m1.507s 00:04:50.116 sys 0m0.376s 00:04:50.116 13:30:38 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.116 13:30:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 
00:04:50.116 ************************************ 00:04:50.116 END TEST alias_rpc 00:04:50.116 ************************************ 00:04:50.116 13:30:38 -- common/autotest_common.sh@1142 -- # return 0 00:04:50.116 13:30:38 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:50.116 13:30:38 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:50.116 13:30:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.116 13:30:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.116 13:30:38 -- common/autotest_common.sh@10 -- # set +x 00:04:50.116 ************************************ 00:04:50.116 START TEST spdkcli_tcp 00:04:50.116 ************************************ 00:04:50.116 13:30:38 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:50.116 * Looking for test storage... 00:04:50.116 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:04:50.116 13:30:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:04:50.116 13:30:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:50.116 13:30:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:04:50.116 13:30:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:50.116 13:30:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:50.116 13:30:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:50.116 13:30:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:50.116 13:30:38 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:50.116 13:30:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.116 13:30:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2419489 00:04:50.116 13:30:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2419489 00:04:50.116 13:30:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:50.116 13:30:38 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2419489 ']' 00:04:50.116 13:30:38 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.116 13:30:38 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.116 13:30:38 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.116 13:30:38 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.116 13:30:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.116 [2024-07-12 13:30:38.646086] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
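As this spdk_tgt instance (-m 0x3, two reactors) finishes the EAL initialization logged here, spdkcli/tcp.sh prepares to exercise RPC over TCP rather than the UNIX socket: a socat process bridges port 9998 to /var/tmp/spdk.sock and rpc.py then talks to 127.0.0.1, as the following lines show. A standalone sketch of that bridge, with the address, port and rpc.py flags taken from the log:

    # Bridge the UNIX-domain RPC socket to TCP the way spdkcli/tcp.sh does.
    IP_ADDRESS=127.0.0.1
    PORT=9998

    socat TCP-LISTEN:$PORT UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Same rpc_get_methods call, now over TCP: -r retries the connection up
    # to 100 times and -t gives each attempt a 2 second timeout.
    ./scripts/rpc.py -r 100 -t 2 -s $IP_ADDRESS -p $PORT rpc_get_methods

    kill "$socat_pid"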
00:04:50.116 [2024-07-12 13:30:38.646171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2419489 ] 00:04:50.116 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.376 [2024-07-12 13:30:38.714990] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.376 [2024-07-12 13:30:38.789501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.376 [2024-07-12 13:30:38.789591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.947 13:30:39 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.947 13:30:39 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:50.947 13:30:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2419657 00:04:50.947 13:30:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:50.947 13:30:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:51.207 [ 00:04:51.207 "spdk_get_version", 00:04:51.207 "rpc_get_methods", 00:04:51.207 "trace_get_info", 00:04:51.207 "trace_get_tpoint_group_mask", 00:04:51.207 "trace_disable_tpoint_group", 00:04:51.208 "trace_enable_tpoint_group", 00:04:51.208 "trace_clear_tpoint_mask", 00:04:51.208 "trace_set_tpoint_mask", 00:04:51.208 "vfu_tgt_set_base_path", 00:04:51.208 "framework_get_pci_devices", 00:04:51.208 "framework_get_config", 00:04:51.208 "framework_get_subsystems", 00:04:51.208 "keyring_get_keys", 00:04:51.208 "iobuf_get_stats", 00:04:51.208 "iobuf_set_options", 00:04:51.208 "sock_get_default_impl", 00:04:51.208 "sock_set_default_impl", 00:04:51.208 "sock_impl_set_options", 00:04:51.208 "sock_impl_get_options", 00:04:51.208 "vmd_rescan", 00:04:51.208 "vmd_remove_device", 00:04:51.208 "vmd_enable", 00:04:51.208 "accel_get_stats", 00:04:51.208 "accel_set_options", 00:04:51.208 "accel_set_driver", 00:04:51.208 "accel_crypto_key_destroy", 00:04:51.208 "accel_crypto_keys_get", 00:04:51.208 "accel_crypto_key_create", 00:04:51.208 "accel_assign_opc", 00:04:51.208 "accel_get_module_info", 00:04:51.208 "accel_get_opc_assignments", 00:04:51.208 "notify_get_notifications", 00:04:51.208 "notify_get_types", 00:04:51.208 "bdev_get_histogram", 00:04:51.208 "bdev_enable_histogram", 00:04:51.208 "bdev_set_qos_limit", 00:04:51.208 "bdev_set_qd_sampling_period", 00:04:51.208 "bdev_get_bdevs", 00:04:51.208 "bdev_reset_iostat", 00:04:51.208 "bdev_get_iostat", 00:04:51.208 "bdev_examine", 00:04:51.208 "bdev_wait_for_examine", 00:04:51.208 "bdev_set_options", 00:04:51.208 "scsi_get_devices", 00:04:51.208 "thread_set_cpumask", 00:04:51.208 "framework_get_governor", 00:04:51.208 "framework_get_scheduler", 00:04:51.208 "framework_set_scheduler", 00:04:51.208 "framework_get_reactors", 00:04:51.208 "thread_get_io_channels", 00:04:51.208 "thread_get_pollers", 00:04:51.208 "thread_get_stats", 00:04:51.208 "framework_monitor_context_switch", 00:04:51.208 "spdk_kill_instance", 00:04:51.208 "log_enable_timestamps", 00:04:51.208 "log_get_flags", 00:04:51.208 "log_clear_flag", 00:04:51.208 "log_set_flag", 00:04:51.208 "log_get_level", 00:04:51.208 "log_set_level", 00:04:51.208 "log_get_print_level", 00:04:51.208 "log_set_print_level", 00:04:51.208 "framework_enable_cpumask_locks", 00:04:51.208 "framework_disable_cpumask_locks", 
00:04:51.208 "framework_wait_init", 00:04:51.208 "framework_start_init", 00:04:51.208 "virtio_blk_create_transport", 00:04:51.208 "virtio_blk_get_transports", 00:04:51.208 "vhost_controller_set_coalescing", 00:04:51.208 "vhost_get_controllers", 00:04:51.208 "vhost_delete_controller", 00:04:51.208 "vhost_create_blk_controller", 00:04:51.208 "vhost_scsi_controller_remove_target", 00:04:51.208 "vhost_scsi_controller_add_target", 00:04:51.208 "vhost_start_scsi_controller", 00:04:51.208 "vhost_create_scsi_controller", 00:04:51.208 "ublk_recover_disk", 00:04:51.208 "ublk_get_disks", 00:04:51.208 "ublk_stop_disk", 00:04:51.208 "ublk_start_disk", 00:04:51.208 "ublk_destroy_target", 00:04:51.208 "ublk_create_target", 00:04:51.208 "nbd_get_disks", 00:04:51.208 "nbd_stop_disk", 00:04:51.208 "nbd_start_disk", 00:04:51.208 "env_dpdk_get_mem_stats", 00:04:51.208 "nvmf_stop_mdns_prr", 00:04:51.208 "nvmf_publish_mdns_prr", 00:04:51.208 "nvmf_subsystem_get_listeners", 00:04:51.208 "nvmf_subsystem_get_qpairs", 00:04:51.208 "nvmf_subsystem_get_controllers", 00:04:51.208 "nvmf_get_stats", 00:04:51.208 "nvmf_get_transports", 00:04:51.208 "nvmf_create_transport", 00:04:51.208 "nvmf_get_targets", 00:04:51.208 "nvmf_delete_target", 00:04:51.208 "nvmf_create_target", 00:04:51.208 "nvmf_subsystem_allow_any_host", 00:04:51.208 "nvmf_subsystem_remove_host", 00:04:51.208 "nvmf_subsystem_add_host", 00:04:51.208 "nvmf_ns_remove_host", 00:04:51.208 "nvmf_ns_add_host", 00:04:51.208 "nvmf_subsystem_remove_ns", 00:04:51.208 "nvmf_subsystem_add_ns", 00:04:51.208 "nvmf_subsystem_listener_set_ana_state", 00:04:51.208 "nvmf_discovery_get_referrals", 00:04:51.208 "nvmf_discovery_remove_referral", 00:04:51.208 "nvmf_discovery_add_referral", 00:04:51.208 "nvmf_subsystem_remove_listener", 00:04:51.208 "nvmf_subsystem_add_listener", 00:04:51.208 "nvmf_delete_subsystem", 00:04:51.208 "nvmf_create_subsystem", 00:04:51.208 "nvmf_get_subsystems", 00:04:51.208 "nvmf_set_crdt", 00:04:51.208 "nvmf_set_config", 00:04:51.208 "nvmf_set_max_subsystems", 00:04:51.208 "iscsi_get_histogram", 00:04:51.208 "iscsi_enable_histogram", 00:04:51.208 "iscsi_set_options", 00:04:51.208 "iscsi_get_auth_groups", 00:04:51.208 "iscsi_auth_group_remove_secret", 00:04:51.208 "iscsi_auth_group_add_secret", 00:04:51.208 "iscsi_delete_auth_group", 00:04:51.208 "iscsi_create_auth_group", 00:04:51.208 "iscsi_set_discovery_auth", 00:04:51.208 "iscsi_get_options", 00:04:51.208 "iscsi_target_node_request_logout", 00:04:51.208 "iscsi_target_node_set_redirect", 00:04:51.208 "iscsi_target_node_set_auth", 00:04:51.208 "iscsi_target_node_add_lun", 00:04:51.208 "iscsi_get_stats", 00:04:51.208 "iscsi_get_connections", 00:04:51.208 "iscsi_portal_group_set_auth", 00:04:51.208 "iscsi_start_portal_group", 00:04:51.208 "iscsi_delete_portal_group", 00:04:51.208 "iscsi_create_portal_group", 00:04:51.208 "iscsi_get_portal_groups", 00:04:51.208 "iscsi_delete_target_node", 00:04:51.208 "iscsi_target_node_remove_pg_ig_maps", 00:04:51.208 "iscsi_target_node_add_pg_ig_maps", 00:04:51.208 "iscsi_create_target_node", 00:04:51.208 "iscsi_get_target_nodes", 00:04:51.208 "iscsi_delete_initiator_group", 00:04:51.208 "iscsi_initiator_group_remove_initiators", 00:04:51.208 "iscsi_initiator_group_add_initiators", 00:04:51.208 "iscsi_create_initiator_group", 00:04:51.208 "iscsi_get_initiator_groups", 00:04:51.208 "keyring_linux_set_options", 00:04:51.208 "keyring_file_remove_key", 00:04:51.208 "keyring_file_add_key", 00:04:51.208 "vfu_virtio_create_scsi_endpoint", 00:04:51.208 
"vfu_virtio_scsi_remove_target", 00:04:51.208 "vfu_virtio_scsi_add_target", 00:04:51.208 "vfu_virtio_create_blk_endpoint", 00:04:51.208 "vfu_virtio_delete_endpoint", 00:04:51.208 "iaa_scan_accel_module", 00:04:51.208 "dsa_scan_accel_module", 00:04:51.208 "ioat_scan_accel_module", 00:04:51.208 "accel_error_inject_error", 00:04:51.208 "bdev_iscsi_delete", 00:04:51.208 "bdev_iscsi_create", 00:04:51.208 "bdev_iscsi_set_options", 00:04:51.208 "bdev_virtio_attach_controller", 00:04:51.208 "bdev_virtio_scsi_get_devices", 00:04:51.208 "bdev_virtio_detach_controller", 00:04:51.208 "bdev_virtio_blk_set_hotplug", 00:04:51.208 "bdev_ftl_set_property", 00:04:51.208 "bdev_ftl_get_properties", 00:04:51.208 "bdev_ftl_get_stats", 00:04:51.208 "bdev_ftl_unmap", 00:04:51.208 "bdev_ftl_unload", 00:04:51.208 "bdev_ftl_delete", 00:04:51.208 "bdev_ftl_load", 00:04:51.208 "bdev_ftl_create", 00:04:51.208 "bdev_aio_delete", 00:04:51.208 "bdev_aio_rescan", 00:04:51.208 "bdev_aio_create", 00:04:51.208 "blobfs_create", 00:04:51.208 "blobfs_detect", 00:04:51.208 "blobfs_set_cache_size", 00:04:51.208 "bdev_zone_block_delete", 00:04:51.208 "bdev_zone_block_create", 00:04:51.208 "bdev_delay_delete", 00:04:51.208 "bdev_delay_create", 00:04:51.208 "bdev_delay_update_latency", 00:04:51.208 "bdev_split_delete", 00:04:51.208 "bdev_split_create", 00:04:51.208 "bdev_error_inject_error", 00:04:51.208 "bdev_error_delete", 00:04:51.208 "bdev_error_create", 00:04:51.208 "bdev_raid_set_options", 00:04:51.208 "bdev_raid_remove_base_bdev", 00:04:51.208 "bdev_raid_add_base_bdev", 00:04:51.208 "bdev_raid_delete", 00:04:51.208 "bdev_raid_create", 00:04:51.208 "bdev_raid_get_bdevs", 00:04:51.208 "bdev_lvol_set_parent_bdev", 00:04:51.208 "bdev_lvol_set_parent", 00:04:51.208 "bdev_lvol_check_shallow_copy", 00:04:51.208 "bdev_lvol_start_shallow_copy", 00:04:51.208 "bdev_lvol_grow_lvstore", 00:04:51.208 "bdev_lvol_get_lvols", 00:04:51.208 "bdev_lvol_get_lvstores", 00:04:51.208 "bdev_lvol_delete", 00:04:51.208 "bdev_lvol_set_read_only", 00:04:51.208 "bdev_lvol_resize", 00:04:51.208 "bdev_lvol_decouple_parent", 00:04:51.208 "bdev_lvol_inflate", 00:04:51.208 "bdev_lvol_rename", 00:04:51.208 "bdev_lvol_clone_bdev", 00:04:51.208 "bdev_lvol_clone", 00:04:51.208 "bdev_lvol_snapshot", 00:04:51.208 "bdev_lvol_create", 00:04:51.208 "bdev_lvol_delete_lvstore", 00:04:51.208 "bdev_lvol_rename_lvstore", 00:04:51.208 "bdev_lvol_create_lvstore", 00:04:51.208 "bdev_passthru_delete", 00:04:51.208 "bdev_passthru_create", 00:04:51.208 "bdev_nvme_cuse_unregister", 00:04:51.208 "bdev_nvme_cuse_register", 00:04:51.208 "bdev_opal_new_user", 00:04:51.208 "bdev_opal_set_lock_state", 00:04:51.208 "bdev_opal_delete", 00:04:51.208 "bdev_opal_get_info", 00:04:51.208 "bdev_opal_create", 00:04:51.208 "bdev_nvme_opal_revert", 00:04:51.208 "bdev_nvme_opal_init", 00:04:51.208 "bdev_nvme_send_cmd", 00:04:51.208 "bdev_nvme_get_path_iostat", 00:04:51.208 "bdev_nvme_get_mdns_discovery_info", 00:04:51.208 "bdev_nvme_stop_mdns_discovery", 00:04:51.208 "bdev_nvme_start_mdns_discovery", 00:04:51.208 "bdev_nvme_set_multipath_policy", 00:04:51.208 "bdev_nvme_set_preferred_path", 00:04:51.208 "bdev_nvme_get_io_paths", 00:04:51.208 "bdev_nvme_remove_error_injection", 00:04:51.208 "bdev_nvme_add_error_injection", 00:04:51.208 "bdev_nvme_get_discovery_info", 00:04:51.208 "bdev_nvme_stop_discovery", 00:04:51.208 "bdev_nvme_start_discovery", 00:04:51.208 "bdev_nvme_get_controller_health_info", 00:04:51.208 "bdev_nvme_disable_controller", 00:04:51.208 "bdev_nvme_enable_controller", 00:04:51.208 
"bdev_nvme_reset_controller", 00:04:51.208 "bdev_nvme_get_transport_statistics", 00:04:51.208 "bdev_nvme_apply_firmware", 00:04:51.208 "bdev_nvme_detach_controller", 00:04:51.208 "bdev_nvme_get_controllers", 00:04:51.208 "bdev_nvme_attach_controller", 00:04:51.208 "bdev_nvme_set_hotplug", 00:04:51.208 "bdev_nvme_set_options", 00:04:51.208 "bdev_null_resize", 00:04:51.208 "bdev_null_delete", 00:04:51.208 "bdev_null_create", 00:04:51.208 "bdev_malloc_delete", 00:04:51.208 "bdev_malloc_create" 00:04:51.208 ] 00:04:51.208 13:30:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:51.208 13:30:39 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:51.209 13:30:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.209 13:30:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:51.209 13:30:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2419489 00:04:51.209 13:30:39 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2419489 ']' 00:04:51.209 13:30:39 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2419489 00:04:51.209 13:30:39 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:51.209 13:30:39 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.209 13:30:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2419489 00:04:51.209 13:30:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:51.209 13:30:39 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:51.209 13:30:39 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2419489' 00:04:51.209 killing process with pid 2419489 00:04:51.209 13:30:39 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2419489 00:04:51.209 13:30:39 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2419489 00:04:51.469 00:04:51.469 real 0m1.368s 00:04:51.469 user 0m2.571s 00:04:51.469 sys 0m0.409s 00:04:51.469 13:30:39 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.469 13:30:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.469 ************************************ 00:04:51.469 END TEST spdkcli_tcp 00:04:51.469 ************************************ 00:04:51.469 13:30:39 -- common/autotest_common.sh@1142 -- # return 0 00:04:51.469 13:30:39 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:51.469 13:30:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.469 13:30:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.469 13:30:39 -- common/autotest_common.sh@10 -- # set +x 00:04:51.469 ************************************ 00:04:51.469 START TEST dpdk_mem_utility 00:04:51.469 ************************************ 00:04:51.469 13:30:39 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:51.730 * Looking for test storage... 
00:04:51.730 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:04:51.730 13:30:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:51.730 13:30:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2419895 00:04:51.730 13:30:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2419895 00:04:51.730 13:30:40 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2419895 ']' 00:04:51.730 13:30:40 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.730 13:30:40 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.730 13:30:40 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.730 13:30:40 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.730 13:30:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:51.730 13:30:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.730 [2024-07-12 13:30:40.095758] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:04:51.730 [2024-07-12 13:30:40.095856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2419895 ] 00:04:51.730 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.730 [2024-07-12 13:30:40.159153] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.730 [2024-07-12 13:30:40.228915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.300 13:30:40 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.300 13:30:40 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:52.300 13:30:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:52.300 13:30:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:52.300 13:30:40 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.300 13:30:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:52.560 { 00:04:52.560 "filename": "/tmp/spdk_mem_dump.txt" 00:04:52.560 } 00:04:52.560 13:30:40 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.560 13:30:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:52.560 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:52.560 1 heaps totaling size 814.000000 MiB 00:04:52.560 size: 814.000000 MiB heap id: 0 00:04:52.560 end heaps---------- 00:04:52.560 8 mempools totaling size 598.116089 MiB 00:04:52.560 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:52.560 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:52.560 size: 84.521057 MiB name: bdev_io_2419895 00:04:52.560 size: 51.011292 MiB name: evtpool_2419895 
00:04:52.560 size: 50.003479 MiB name: msgpool_2419895 00:04:52.560 size: 21.763794 MiB name: PDU_Pool 00:04:52.560 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:52.560 size: 0.026123 MiB name: Session_Pool 00:04:52.560 end mempools------- 00:04:52.560 6 memzones totaling size 4.142822 MiB 00:04:52.560 size: 1.000366 MiB name: RG_ring_0_2419895 00:04:52.560 size: 1.000366 MiB name: RG_ring_1_2419895 00:04:52.560 size: 1.000366 MiB name: RG_ring_4_2419895 00:04:52.560 size: 1.000366 MiB name: RG_ring_5_2419895 00:04:52.560 size: 0.125366 MiB name: RG_ring_2_2419895 00:04:52.560 size: 0.015991 MiB name: RG_ring_3_2419895 00:04:52.560 end memzones------- 00:04:52.560 13:30:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:52.560 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:52.560 list of free elements. size: 12.519348 MiB 00:04:52.560 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:52.561 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:52.561 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:52.561 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:52.561 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:52.561 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:52.561 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:52.561 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:52.561 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:52.561 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:52.561 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:52.561 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:52.561 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:52.561 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:52.561 element at address: 0x200003a00000 with size: 0.355530 MiB
00:04:52.561 list of standard malloc elements. size: 199.218079 MiB 00:04:52.561 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:52.561 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:52.561 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:52.561 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:52.561 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:52.561 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:52.561 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:52.561 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:52.561 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:52.561 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:52.561 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:52.561 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:52.561 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:52.561 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:52.561 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:52.561 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:52.561 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:52.561 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:52.561 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:52.561 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:52.561 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:52.561 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:52.561 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:52.561 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:52.561 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:52.561 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:52.561 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:52.561 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:52.561 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:52.561 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:52.561 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:52.561 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:52.561 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:52.561 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:52.561 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:52.561 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:52.561 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:52.561 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:52.561 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:52.561 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:52.561 element at address: 0x200027e6ff00 with size: 0.000183 MiB
00:04:52.561 list of memzone associated elements. size: 602.262573 MiB 00:04:52.561 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:52.561 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:52.561 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:52.561 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:52.561 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:52.561 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2419895_0 00:04:52.561 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:52.561 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2419895_0 00:04:52.561 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:52.561 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2419895_0 00:04:52.561 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:52.561 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:52.561 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:52.561 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:52.561 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:52.561 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2419895 00:04:52.561 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:52.561 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2419895 00:04:52.561 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:52.561 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2419895 00:04:52.561 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:52.561 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:52.561 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:52.561 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:52.561 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:52.561 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:52.561 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:52.561 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:52.561 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:52.561 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2419895 00:04:52.561 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:52.561 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2419895 00:04:52.561 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:52.561 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2419895 00:04:52.561 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:52.561 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2419895 00:04:52.561 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:52.561 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2419895 00:04:52.561 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:52.561 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:52.561 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:52.561 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:52.561 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:52.561 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:52.561 element at address: 0x200003adf880 with size: 0.125488 MiB
00:04:52.561 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2419895 00:04:52.561 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:52.561 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:52.561 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:52.561 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:52.561 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:52.561 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2419895 00:04:52.561 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:52.561 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:52.561 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:52.561 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2419895 00:04:52.561 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:52.561 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2419895 00:04:52.561 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:52.561 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:52.561 13:30:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:52.561 13:30:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2419895 00:04:52.561 13:30:40 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2419895 ']' 00:04:52.561 13:30:40 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2419895 00:04:52.561 13:30:40 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:52.561 13:30:40 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:52.561 13:30:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2419895 00:04:52.561 13:30:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:52.561 13:30:41 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:52.561 13:30:41 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2419895' 00:04:52.561 killing process with pid 2419895 00:04:52.561 13:30:41 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2419895 00:04:52.561 13:30:41 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2419895 00:04:52.823 00:04:52.823 real 0m1.266s 00:04:52.823 user 0m1.333s 00:04:52.823 sys 0m0.354s 00:04:52.823 13:30:41 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.823 13:30:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:52.823 ************************************ 00:04:52.823 END TEST dpdk_mem_utility 00:04:52.823 ************************************ 00:04:52.823 13:30:41 -- common/autotest_common.sh@1142 -- # return 0 00:04:52.823 13:30:41 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:04:52.823 13:30:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.823 13:30:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.823 13:30:41 -- common/autotest_common.sh@10 -- # set +x 00:04:52.823 ************************************ 00:04:52.823 START TEST event 00:04:52.823 ************************************ 00:04:52.823 13:30:41 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:04:52.823 * Looking for test storage...
00:04:53.083 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:04:53.083 13:30:41 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:53.083 13:30:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:53.083 13:30:41 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:53.083 13:30:41 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:53.083 13:30:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.083 13:30:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.083 ************************************ 00:04:53.083 START TEST event_perf 00:04:53.083 ************************************ 00:04:53.083 13:30:41 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:53.083 Running I/O for 1 seconds...[2024-07-12 13:30:41.470109] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:04:53.083 [2024-07-12 13:30:41.470218] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2420282 ] 00:04:53.083 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.083 [2024-07-12 13:30:41.540542] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:53.083 [2024-07-12 13:30:41.619972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.083 [2024-07-12 13:30:41.620090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:53.083 [2024-07-12 13:30:41.620263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.083 [2024-07-12 13:30:41.620263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:54.542 Running I/O for 1 seconds... 00:04:54.542 lcore 0: 163263 00:04:54.542 lcore 1: 163262 00:04:54.542 lcore 2: 163262 00:04:54.542 lcore 3: 163265 00:04:54.542 done. 00:04:54.542 00:04:54.542 real 0m1.217s 00:04:54.542 user 0m4.121s 00:04:54.542 sys 0m0.095s 00:04:54.542 13:30:42 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.542 13:30:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:54.542 ************************************ 00:04:54.542 END TEST event_perf 00:04:54.542 ************************************ 00:04:54.542 13:30:42 event -- common/autotest_common.sh@1142 -- # return 0 00:04:54.542 13:30:42 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:54.542 13:30:42 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:54.542 13:30:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.542 13:30:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.542 ************************************ 00:04:54.542 START TEST event_reactor 00:04:54.542 ************************************ 00:04:54.542 13:30:42 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:54.542 [2024-07-12 13:30:42.750286] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
00:04:54.542 [2024-07-12 13:30:42.750399] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2420470 ] 00:04:54.542 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.542 [2024-07-12 13:30:42.816744] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.542 [2024-07-12 13:30:42.884156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.483 test_start 00:04:55.483 oneshot 00:04:55.483 tick 100 00:04:55.483 tick 100 00:04:55.483 tick 250 00:04:55.483 tick 100 00:04:55.483 tick 100 00:04:55.483 tick 100 00:04:55.483 tick 250 00:04:55.483 tick 500 00:04:55.483 tick 100 00:04:55.483 tick 100 00:04:55.483 tick 250 00:04:55.483 tick 100 00:04:55.483 tick 100 00:04:55.483 test_end 00:04:55.483 00:04:55.483 real 0m1.199s 00:04:55.483 user 0m1.116s 00:04:55.483 sys 0m0.079s 00:04:55.483 13:30:43 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.483 13:30:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:55.483 ************************************ 00:04:55.483 END TEST event_reactor 00:04:55.483 ************************************ 00:04:55.483 13:30:43 event -- common/autotest_common.sh@1142 -- # return 0 00:04:55.483 13:30:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:55.483 13:30:43 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:55.483 13:30:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.483 13:30:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.483 ************************************ 00:04:55.483 START TEST event_reactor_perf 00:04:55.483 ************************************ 00:04:55.483 13:30:43 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:55.483 [2024-07-12 13:30:44.015624] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
00:04:55.483 [2024-07-12 13:30:44.015743] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2420679 ] 00:04:55.484 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.744 [2024-07-12 13:30:44.081115] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.744 [2024-07-12 13:30:44.150240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.688 test_start 00:04:56.688 test_end 00:04:56.688 Performance: 637164 events per second 00:04:56.688 00:04:56.688 real 0m1.201s 00:04:56.688 user 0m1.123s 00:04:56.688 sys 0m0.073s 00:04:56.688 13:30:45 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.688 13:30:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:56.688 ************************************ 00:04:56.688 END TEST event_reactor_perf 00:04:56.688 ************************************ 00:04:56.689 13:30:45 event -- common/autotest_common.sh@1142 -- # return 0 00:04:56.689 13:30:45 event -- event/event.sh@49 -- # uname -s 00:04:56.689 13:30:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:56.689 13:30:45 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:56.689 13:30:45 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.689 13:30:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.689 13:30:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.950 ************************************ 00:04:56.950 START TEST event_scheduler 00:04:56.950 ************************************ 00:04:56.950 13:30:45 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:56.950 * Looking for test storage... 00:04:56.950 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:04:56.950 13:30:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:56.950 13:30:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2421056 00:04:56.950 13:30:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.950 13:30:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:56.950 13:30:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2421056 00:04:56.950 13:30:45 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2421056 ']' 00:04:56.950 13:30:45 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.950 13:30:45 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.950 13:30:45 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
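The three event benchmarks just completed (event_perf, event_reactor, event_reactor_perf) follow one recipe: run the binary for a fixed -t 1 second, let it print a summary, and let run_test record the timing. A sketch of driving them directly; the binary paths and flags come from the log, and the final grep is illustrative rather than part of the harness.

    # One-second event framework benchmarks, invoked as in the log above.
    EVENT_DIR=./test/event

    "$EVENT_DIR/event_perf/event_perf" -m 0xF -t 1   # prints per-lcore event counts
    "$EVENT_DIR/reactor/reactor" -t 1                # prints the oneshot/tick trace
    # reactor_perf ends with e.g. "Performance: 637164 events per second"
    "$EVENT_DIR/reactor_perf/reactor_perf" -t 1 | grep 'events per second'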
00:04:56.950 13:30:45 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.950 13:30:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.950 [2024-07-12 13:30:45.399111] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:04:56.950 [2024-07-12 13:30:45.399194] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2421056 ] 00:04:56.950 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.950 [2024-07-12 13:30:45.461669] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:56.950 [2024-07-12 13:30:45.530429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.950 [2024-07-12 13:30:45.530583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.950 [2024-07-12 13:30:45.530734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.950 [2024-07-12 13:30:45.530736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:57.893 13:30:46 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.893 13:30:46 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:57.893 13:30:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:57.893 13:30:46 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.893 13:30:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.893 [2024-07-12 13:30:46.212897] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:57.893 [2024-07-12 13:30:46.212913] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:57.893 [2024-07-12 13:30:46.212921] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:57.893 [2024-07-12 13:30:46.212926] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:57.893 [2024-07-12 13:30:46.212930] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:57.893 13:30:46 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.893 13:30:46 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:57.893 13:30:46 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.893 13:30:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.893 [2024-07-12 13:30:46.270776] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
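scheduler.sh starts its app with --wait-for-rpc, so nothing runs until the RPCs above arrive: framework_set_scheduler dynamic (which falls back when the dpdk governor cannot initialize, as the NOTICE lines show), then framework_start_init, then thread RPCs supplied by the test's scheduler_plugin. A sketch of the same sequence issued by hand; rpc_cmd is approximated here with plain rpc.py calls, and the plugin directory is assumed to be on PYTHONPATH.

    # Replay the scheduler setup from the log against an app started with --wait-for-rpc.
    rpc=./scripts/rpc.py

    $rpc framework_set_scheduler dynamic   # log: load limit 20, core limit 80, core busy 95
    $rpc framework_start_init

    # scheduler_thread_create is provided by the test's scheduler_plugin,
    # loaded via rpc.py's --plugin option; masks and loads mirror the log.
    $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0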
00:04:57.893 13:30:46 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.893 13:30:46 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:57.893 13:30:46 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.893 13:30:46 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.893 13:30:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.893 ************************************ 00:04:57.893 START TEST scheduler_create_thread 00:04:57.893 ************************************ 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.893 2 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.893 3 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.893 4 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.893 5 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.893 6 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.893 7 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.893 8 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.893 9 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.893 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.464 10 00:04:58.464 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.464 13:30:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:58.464 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.464 13:30:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.848 13:30:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.848 13:30:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:59.848 13:30:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:59.848 13:30:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.848 13:30:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.789 13:30:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.789 13:30:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:00.789 13:30:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.789 13:30:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.360 13:30:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.360 13:30:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:01.360 13:30:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:01.360 13:30:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.360 13:30:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.300 13:30:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.300 00:05:02.300 real 0m4.223s 00:05:02.300 user 0m0.026s 00:05:02.300 sys 0m0.005s 00:05:02.300 13:30:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.300 13:30:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.300 ************************************ 00:05:02.300 END TEST scheduler_create_thread 00:05:02.300 ************************************ 00:05:02.300 13:30:50 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:02.300 13:30:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:02.300 13:30:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2421056 00:05:02.300 13:30:50 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2421056 ']' 00:05:02.301 13:30:50 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2421056 00:05:02.301 13:30:50 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:02.301 13:30:50 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:02.301 13:30:50 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2421056 00:05:02.301 13:30:50 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:02.301 13:30:50 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:02.301 13:30:50 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2421056' 00:05:02.301 killing process with pid 2421056 00:05:02.301 13:30:50 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2421056 00:05:02.301 13:30:50 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2421056 00:05:02.301 [2024-07-12 13:30:50.812032] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
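A note on the scheduler_create_thread block above: the whole test is driven through rpc.py with the scheduler test plugin. A minimal sketch of the same call sequence, assuming the scheduler test app from earlier in the run is still listening and that the scheduler_plugin module is on rpc.py's plugin path (method and option names are copied from the trace; the thread ids in the comments are the ones this run happened to return):

RPC="/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py --plugin scheduler_plugin"
# Busy threads pinned one per core (the trace repeats this for masks 0x1..0x8),
# each claiming 100% active load:
$RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100
# Matching idle threads on the same cores:
$RPC scheduler_thread_create -n idle_pinned -m 0x1 -a 0
# Unpinned threads characterized only by load, then retuned and removed by id:
$RPC scheduler_thread_create -n one_third_active -a 30
thread_id=$($RPC scheduler_thread_create -n half_active -a 0)    # trace: 11
$RPC scheduler_thread_set_active "$thread_id" 50
thread_id=$($RPC scheduler_thread_create -n deleted -a 100)      # trace: 12
$RPC scheduler_thread_delete "$thread_id"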
00:05:02.561 00:05:02.561 real 0m5.697s 00:05:02.561 user 0m12.809s 00:05:02.561 sys 0m0.339s 00:05:02.561 13:30:50 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.561 13:30:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:02.561 ************************************ 00:05:02.561 END TEST event_scheduler 00:05:02.561 ************************************ 00:05:02.561 13:30:51 event -- common/autotest_common.sh@1142 -- # return 0 00:05:02.561 13:30:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:02.561 13:30:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:02.561 13:30:51 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.561 13:30:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.561 13:30:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.561 ************************************ 00:05:02.561 START TEST app_repeat 00:05:02.561 ************************************ 00:05:02.561 13:30:51 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:02.561 13:30:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.561 13:30:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.561 13:30:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:02.561 13:30:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.561 13:30:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:02.561 13:30:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:02.561 13:30:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:02.561 13:30:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2422208 00:05:02.561 13:30:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.561 13:30:51 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:02.561 13:30:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2422208' 00:05:02.561 Process app_repeat pid: 2422208 00:05:02.561 13:30:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:02.561 13:30:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:02.561 spdk_app_start Round 0 00:05:02.561 13:30:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2422208 /var/tmp/spdk-nbd.sock 00:05:02.561 13:30:51 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2422208 ']' 00:05:02.561 13:30:51 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.561 13:30:51 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.561 13:30:51 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:02.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:02.561 13:30:51 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.561 13:30:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.561 [2024-07-12 13:30:51.077086] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
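The app_repeat binary was launched just above with three options. Spelled out below; the meanings are inferred from the harness variables visible in the trace, not from app_repeat --help, so treat them as assumptions:

#   -r  RPC listen address, reused by every rpc.py call in the rounds below
#   -m  core mask 0x3, i.e. reactors on cores 0 and 1
#   -t  4, fed from repeat_times=4 in event.sh
./test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4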
00:05:02.561 [2024-07-12 13:30:51.077195] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2422208 ] 00:05:02.561 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.822 [2024-07-12 13:30:51.146183] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.822 [2024-07-12 13:30:51.220659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.822 [2024-07-12 13:30:51.220661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.393 13:30:51 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.393 13:30:51 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:03.393 13:30:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.677 Malloc0 00:05:03.677 13:30:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.677 Malloc1 00:05:03.677 13:30:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.677 13:30:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.677 13:30:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.677 13:30:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:03.677 13:30:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.677 13:30:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:03.677 13:30:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.677 13:30:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.677 13:30:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.677 13:30:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:03.677 13:30:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.677 13:30:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:03.677 13:30:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:03.677 13:30:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:03.677 13:30:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.677 13:30:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:03.937 /dev/nbd0 00:05:03.937 13:30:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:03.937 13:30:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:03.937 13:30:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:03.937 13:30:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:03.937 13:30:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:03.937 13:30:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:03.937 13:30:52 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:03.937 13:30:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:03.937 13:30:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:03.937 13:30:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:03.937 13:30:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.937 1+0 records in 00:05:03.937 1+0 records out 00:05:03.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215741 s, 19.0 MB/s 00:05:03.937 13:30:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:03.937 13:30:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:03.937 13:30:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:03.937 13:30:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:03.937 13:30:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:03.937 13:30:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.937 13:30:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.937 13:30:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:04.198 /dev/nbd1 00:05:04.198 13:30:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:04.198 13:30:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:04.198 13:30:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:04.198 13:30:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:04.198 13:30:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:04.198 13:30:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:04.198 13:30:52 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:04.198 13:30:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:04.198 13:30:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:04.198 13:30:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:04.198 13:30:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.198 1+0 records in 00:05:04.198 1+0 records out 00:05:04.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330328 s, 12.4 MB/s 00:05:04.198 13:30:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:04.198 13:30:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:04.198 13:30:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:04.198 13:30:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:04.198 13:30:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:04.198 13:30:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.198 
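Most of the xtrace noise around each nbd_start_disk comes from the waitfornbd helper, which just finished for nbd0 above. Roughly what it does, as a simplified sketch of the autotest_common.sh helper (the retry delay and the temp-file path are assumptions; the trace only shows the successful first attempts):

waitfornbd() {
  local nbd_name=$1 i
  # 1) wait until the kernel lists the device
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1
  done
  # 2) retry one direct-I/O read until it yields a non-empty block
  for ((i = 1; i <= 20; i++)); do
    dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    local size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ] && return 0
    sleep 0.1
  done
  return 1
}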
13:30:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.198 13:30:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.198 13:30:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.198 13:30:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.198 13:30:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:04.198 { 00:05:04.198 "nbd_device": "/dev/nbd0", 00:05:04.198 "bdev_name": "Malloc0" 00:05:04.198 }, 00:05:04.198 { 00:05:04.198 "nbd_device": "/dev/nbd1", 00:05:04.198 "bdev_name": "Malloc1" 00:05:04.198 } 00:05:04.198 ]' 00:05:04.198 13:30:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:04.198 { 00:05:04.198 "nbd_device": "/dev/nbd0", 00:05:04.198 "bdev_name": "Malloc0" 00:05:04.198 }, 00:05:04.198 { 00:05:04.198 "nbd_device": "/dev/nbd1", 00:05:04.198 "bdev_name": "Malloc1" 00:05:04.198 } 00:05:04.198 ]' 00:05:04.198 13:30:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.198 13:30:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:04.198 /dev/nbd1' 00:05:04.198 13:30:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:04.198 /dev/nbd1' 00:05:04.198 13:30:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.198 13:30:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:04.198 13:30:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:04.198 13:30:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:04.198 13:30:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:04.198 13:30:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:04.459 256+0 records in 00:05:04.459 256+0 records out 00:05:04.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118918 s, 88.2 MB/s 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:04.459 256+0 records in 00:05:04.459 256+0 records out 00:05:04.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299633 s, 35.0 MB/s 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:04.459 256+0 records in 00:05:04.459 256+0 records out 
00:05:04.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189361 s, 55.4 MB/s 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.459 13:30:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:04.459 13:30:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:04.459 13:30:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:04.459 13:30:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:04.459 13:30:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.459 13:30:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.459 13:30:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:04.459 13:30:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.459 13:30:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.459 13:30:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.459 13:30:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:04.718 13:30:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:04.718 13:30:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:04.718 13:30:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:04.718 13:30:53 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.718 13:30:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.718 13:30:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:04.718 13:30:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.718 13:30:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.718 13:30:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.718 13:30:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.718 13:30:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.978 13:30:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:04.978 13:30:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:04.978 13:30:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.978 13:30:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:04.978 13:30:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:04.978 13:30:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.978 13:30:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:04.978 13:30:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:04.978 13:30:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:04.978 13:30:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:04.978 13:30:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:04.978 13:30:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:04.978 13:30:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:05.237 13:30:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:05.237 [2024-07-12 13:30:53.719407] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.237 [2024-07-12 13:30:53.785095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.237 [2024-07-12 13:30:53.785097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.237 [2024-07-12 13:30:53.814389] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:05.237 [2024-07-12 13:30:53.814425] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:08.532 13:30:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:08.532 13:30:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:08.532 spdk_app_start Round 1 00:05:08.532 13:30:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2422208 /var/tmp/spdk-nbd.sock 00:05:08.532 13:30:56 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2422208 ']' 00:05:08.532 13:30:56 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:08.532 13:30:56 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.532 13:30:56 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:08.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
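The Round 0 write/verify cycle above, which Rounds 1 and 2 repeat verbatim below, boils down to: fill a temp file with random bytes, stream it through both nbd devices with direct I/O, then byte-compare the readback against the source. Equivalent commands, with the long workspace path shortened to nbdrandtest:

dd if=/dev/urandom of=nbdrandtest bs=4096 count=256           # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
  dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct    # write phase
done
for nbd in /dev/nbd0 /dev/nbd1; do
  cmp -b -n 1M nbdrandtest $nbd                               # verify: device vs. source
done
rm nbdrandtest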
00:05:08.532 13:30:56 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.532 13:30:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:08.532 13:30:56 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.532 13:30:56 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:08.532 13:30:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.532 Malloc0 00:05:08.532 13:30:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.532 Malloc1 00:05:08.532 13:30:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.532 13:30:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.532 13:30:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.532 13:30:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:08.532 13:30:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.532 13:30:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:08.532 13:30:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.532 13:30:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.532 13:30:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.532 13:30:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:08.532 13:30:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.532 13:30:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:08.532 13:30:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:08.532 13:30:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:08.532 13:30:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.532 13:30:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:08.792 /dev/nbd0 00:05:08.792 13:30:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:08.792 13:30:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:08.792 13:30:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:08.792 13:30:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:08.792 13:30:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:08.792 13:30:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:08.792 13:30:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:08.792 13:30:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:08.792 13:30:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:08.792 13:30:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:08.792 13:30:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.792 1+0 records in 00:05:08.792 1+0 records out 00:05:08.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270418 s, 15.1 MB/s 00:05:08.792 13:30:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:08.792 13:30:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:08.792 13:30:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:08.792 13:30:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:08.792 13:30:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:08.792 13:30:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.792 13:30:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.792 13:30:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:09.052 /dev/nbd1 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:09.052 13:30:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:09.052 13:30:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:09.052 13:30:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:09.052 13:30:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:09.052 13:30:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:09.052 13:30:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:09.052 13:30:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:09.052 13:30:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:09.052 13:30:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.052 1+0 records in 00:05:09.052 1+0 records out 00:05:09.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022794 s, 18.0 MB/s 00:05:09.052 13:30:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:09.052 13:30:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:09.052 13:30:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:09.052 13:30:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:09.052 13:30:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:09.052 { 00:05:09.052 "nbd_device": "/dev/nbd0", 00:05:09.052 "bdev_name": "Malloc0" 00:05:09.052 }, 00:05:09.052 { 00:05:09.052 "nbd_device": "/dev/nbd1", 00:05:09.052 "bdev_name": "Malloc1" 00:05:09.052 } 00:05:09.052 ]' 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:09.052 { 00:05:09.052 "nbd_device": "/dev/nbd0", 00:05:09.052 "bdev_name": "Malloc0" 00:05:09.052 }, 00:05:09.052 { 00:05:09.052 "nbd_device": "/dev/nbd1", 00:05:09.052 "bdev_name": "Malloc1" 00:05:09.052 } 00:05:09.052 ]' 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:09.052 /dev/nbd1' 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:09.052 /dev/nbd1' 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:09.052 13:30:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:09.312 256+0 records in 00:05:09.313 256+0 records out 00:05:09.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124924 s, 83.9 MB/s 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:09.313 256+0 records in 00:05:09.313 256+0 records out 00:05:09.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018318 s, 57.2 MB/s 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:09.313 256+0 records in 00:05:09.313 256+0 records out 00:05:09.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170725 s, 61.4 MB/s 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.313 13:30:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:09.573 13:30:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:09.573 13:30:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:09.573 13:30:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:09.573 13:30:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.573 13:30:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.573 13:30:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:09.573 13:30:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.573 13:30:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.573 13:30:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.573 13:30:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.573 13:30:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.833 13:30:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:09.833 13:30:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:09.833 13:30:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.833 13:30:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:09.833 13:30:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:09.833 13:30:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.833 13:30:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:09.833 13:30:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:09.833 13:30:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:09.833 13:30:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:09.833 13:30:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:09.833 13:30:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:09.833 13:30:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:09.833 13:30:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:10.093 [2024-07-12 13:30:58.541453] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.093 [2024-07-12 13:30:58.606454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.093 [2024-07-12 13:30:58.606457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.093 [2024-07-12 13:30:58.636567] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:10.093 [2024-07-12 13:30:58.636604] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:13.391 13:31:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:13.391 13:31:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:13.391 spdk_app_start Round 2 00:05:13.391 13:31:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2422208 /var/tmp/spdk-nbd.sock 00:05:13.391 13:31:01 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2422208 ']' 00:05:13.391 13:31:01 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.391 13:31:01 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.391 13:31:01 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:13.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
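Between rounds, the harness proves teardown by counting exported devices over RPC; the jq pipeline is the one visible in the trace both after Round 0 and just above (socket path as used throughout this test):

rpc="/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ]    # 0 means both devices were detached cleanly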
00:05:13.391 13:31:01 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.391 13:31:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.391 13:31:01 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.391 13:31:01 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:13.391 13:31:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.391 Malloc0 00:05:13.391 13:31:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.391 Malloc1 00:05:13.391 13:31:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.391 13:31:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.391 13:31:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.392 13:31:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:13.392 13:31:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.392 13:31:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:13.392 13:31:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.392 13:31:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.392 13:31:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.392 13:31:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:13.392 13:31:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.392 13:31:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:13.392 13:31:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:13.392 13:31:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:13.392 13:31:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.392 13:31:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:13.652 /dev/nbd0 00:05:13.652 13:31:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:13.652 13:31:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:13.652 13:31:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:13.652 13:31:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:13.652 13:31:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:13.652 13:31:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:13.652 13:31:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:13.652 13:31:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:13.652 13:31:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:13.652 13:31:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:13.652 13:31:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.652 1+0 records in 00:05:13.652 1+0 records out 00:05:13.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243804 s, 16.8 MB/s 00:05:13.652 13:31:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:13.652 13:31:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:13.652 13:31:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:13.652 13:31:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:13.652 13:31:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:13.652 13:31:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.652 13:31:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.652 13:31:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:13.652 /dev/nbd1 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:13.913 13:31:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:13.913 13:31:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:13.913 13:31:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:13.913 13:31:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:13.913 13:31:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:13.913 13:31:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:13.913 13:31:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:13.913 13:31:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:13.913 13:31:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.913 1+0 records in 00:05:13.913 1+0 records out 00:05:13.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245033 s, 16.7 MB/s 00:05:13.913 13:31:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:13.913 13:31:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:13.913 13:31:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:13.913 13:31:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:13.913 13:31:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:13.913 { 00:05:13.913 "nbd_device": "/dev/nbd0", 00:05:13.913 "bdev_name": "Malloc0" 00:05:13.913 }, 00:05:13.913 { 00:05:13.913 "nbd_device": "/dev/nbd1", 00:05:13.913 "bdev_name": "Malloc1" 00:05:13.913 } 00:05:13.913 ]' 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:13.913 { 00:05:13.913 "nbd_device": "/dev/nbd0", 00:05:13.913 "bdev_name": "Malloc0" 00:05:13.913 }, 00:05:13.913 { 00:05:13.913 "nbd_device": "/dev/nbd1", 00:05:13.913 "bdev_name": "Malloc1" 00:05:13.913 } 00:05:13.913 ]' 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:13.913 /dev/nbd1' 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:13.913 /dev/nbd1' 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:13.913 13:31:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:14.174 256+0 records in 00:05:14.174 256+0 records out 00:05:14.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122897 s, 85.3 MB/s 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:14.174 256+0 records in 00:05:14.174 256+0 records out 00:05:14.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157637 s, 66.5 MB/s 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:14.174 256+0 records in 00:05:14.174 256+0 records out 00:05:14.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0178564 s, 58.7 MB/s 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.174 13:31:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:14.434 13:31:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:14.434 13:31:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:14.434 13:31:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:14.434 13:31:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.434 13:31:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.434 13:31:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:14.434 13:31:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.434 13:31:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.434 13:31:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.434 13:31:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.434 13:31:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.693 13:31:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:14.693 13:31:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:14.693 13:31:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.693 13:31:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:14.693 13:31:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:14.693 13:31:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.693 13:31:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:14.693 13:31:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:14.693 13:31:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:14.693 13:31:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:14.693 13:31:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:14.693 13:31:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:14.693 13:31:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:14.693 13:31:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:14.953 [2024-07-12 13:31:03.389545] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.953 [2024-07-12 13:31:03.455709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.953 [2024-07-12 13:31:03.455711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.953 [2024-07-12 13:31:03.485006] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:14.953 [2024-07-12 13:31:03.485044] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:18.248 13:31:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2422208 /var/tmp/spdk-nbd.sock 00:05:18.248 13:31:06 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2422208 ']' 00:05:18.249 13:31:06 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:18.249 13:31:06 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.249 13:31:06 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:18.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
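The final cleanup that follows goes through the killprocess helper. Its shape, reconstructed from the xtrace below (a simplified sketch; the branch for processes running under sudo is elided because the trace never takes it):

killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1
  kill -0 "$pid" || return 0                     # already gone
  if [ "$(uname)" = Linux ] &&
     [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
    return    # the real helper escalates via sudo here
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"
}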
00:05:18.249 13:31:06 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.249 13:31:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.249 13:31:06 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.249 13:31:06 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:18.249 13:31:06 event.app_repeat -- event/event.sh@39 -- # killprocess 2422208 00:05:18.249 13:31:06 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2422208 ']' 00:05:18.249 13:31:06 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2422208 00:05:18.249 13:31:06 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:18.249 13:31:06 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.249 13:31:06 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2422208 00:05:18.249 13:31:06 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.249 13:31:06 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.249 13:31:06 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2422208' 00:05:18.249 killing process with pid 2422208 00:05:18.249 13:31:06 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2422208 00:05:18.249 13:31:06 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2422208 00:05:18.249 spdk_app_start is called in Round 0. 00:05:18.249 Shutdown signal received, stop current app iteration 00:05:18.249 Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 reinitialization... 00:05:18.249 spdk_app_start is called in Round 1. 00:05:18.249 Shutdown signal received, stop current app iteration 00:05:18.249 Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 reinitialization... 00:05:18.249 spdk_app_start is called in Round 2. 00:05:18.249 Shutdown signal received, stop current app iteration 00:05:18.249 Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 reinitialization... 00:05:18.249 spdk_app_start is called in Round 3. 
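The killprocess trace above shows the shutdown guardrails in autotest_common.sh: kill -0 first confirms the pid is alive, ps --no-headers -o comm= resolves the process name so a sudo wrapper is never signalled directly, and wait reaps the pid so its exit status is collected. A condensed sketch (the real helper has extra branches for root-owned and already-dead processes):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                    # still running?
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" = sudo ] && return 1            # never signal the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                           # reap; tolerate a non-zero exit
    }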
00:05:18.249 Shutdown signal received, stop current app iteration 00:05:18.249 13:31:06 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:18.249 13:31:06 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:18.249 00:05:18.249 real 0m15.529s 00:05:18.249 user 0m33.295s 00:05:18.249 sys 0m2.368s 00:05:18.249 13:31:06 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.249 13:31:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.249 ************************************ 00:05:18.249 END TEST app_repeat 00:05:18.249 ************************************ 00:05:18.249 13:31:06 event -- common/autotest_common.sh@1142 -- # return 0 00:05:18.249 13:31:06 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:18.249 13:31:06 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:18.249 13:31:06 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.249 13:31:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.249 13:31:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.249 ************************************ 00:05:18.249 START TEST cpu_locks 00:05:18.249 ************************************ 00:05:18.249 13:31:06 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:18.249 * Looking for test storage... 00:05:18.249 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:18.249 13:31:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:18.249 13:31:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:18.249 13:31:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:18.249 13:31:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:18.249 13:31:06 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.249 13:31:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.249 13:31:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.249 ************************************ 00:05:18.249 START TEST default_locks 00:05:18.249 ************************************ 00:05:18.249 13:31:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:18.249 13:31:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2425691 00:05:18.249 13:31:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2425691 00:05:18.249 13:31:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.249 13:31:06 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2425691 ']' 00:05:18.249 13:31:06 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.249 13:31:06 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.249 13:31:06 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
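default_locks begins by starting a single spdk_tgt pinned to core 0 (-m 0x1); on start-up the app claims one advisory lock file per core in its mask (the /var/tmp/spdk_cpu_lock_* files the lslocks checks below look for), and waitforlisten blocks until the RPC socket answers. A simplified launch-and-wait sketch, assuming the default socket path (the real waitforlisten also probes the socket with an actual RPC and checks the pid is still alive):

    ./build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!
    for ((i = 0; i < 100; i++)); do
        [ -S /var/tmp/spdk.sock ] && break   # simplified: the real helper issues an RPC probe
        sleep 0.1
    done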
00:05:18.249 13:31:06 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.249 13:31:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.249 [2024-07-12 13:31:06.825259] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:05:18.249 [2024-07-12 13:31:06.825337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2425691 ] 00:05:18.508 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.509 [2024-07-12 13:31:06.891769] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.509 [2024-07-12 13:31:06.968112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.076 13:31:07 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.076 13:31:07 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:19.076 13:31:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2425691 00:05:19.076 13:31:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2425691 00:05:19.076 13:31:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.347 lslocks: write error 00:05:19.347 13:31:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2425691 00:05:19.348 13:31:07 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2425691 ']' 00:05:19.348 13:31:07 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2425691 00:05:19.348 13:31:07 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:19.348 13:31:07 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.348 13:31:07 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2425691 00:05:19.348 13:31:07 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.348 13:31:07 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.348 13:31:07 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2425691' 00:05:19.348 killing process with pid 2425691 00:05:19.348 13:31:07 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2425691 00:05:19.348 13:31:07 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2425691 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2425691 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2425691 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2425691 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2425691 ']' 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.610 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2425691) - No such process 00:05:19.610 ERROR: process (pid: 2425691) is no longer running 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:19.610 00:05:19.610 real 0m1.226s 00:05:19.610 user 0m1.297s 00:05:19.610 sys 0m0.398s 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.610 13:31:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.610 ************************************ 00:05:19.610 END TEST default_locks 00:05:19.610 ************************************ 00:05:19.610 13:31:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:19.610 13:31:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:19.610 13:31:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.610 13:31:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.610 13:31:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.610 ************************************ 00:05:19.610 START TEST default_locks_via_rpc 00:05:19.610 ************************************ 00:05:19.610 13:31:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:19.610 13:31:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2425908 00:05:19.610 13:31:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2425908 00:05:19.610 13:31:08 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.610 13:31:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2425908 ']' 00:05:19.610 13:31:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.610 13:31:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.610 13:31:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.610 13:31:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.610 13:31:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.610 [2024-07-12 13:31:08.124476] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:05:19.610 [2024-07-12 13:31:08.124547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2425908 ] 00:05:19.610 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.610 [2024-07-12 13:31:08.186990] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.869 [2024-07-12 13:31:08.254522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.439 13:31:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.439 13:31:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:20.439 13:31:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:20.439 13:31:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.439 13:31:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.440 13:31:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.440 13:31:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:20.440 13:31:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:20.440 13:31:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:20.440 13:31:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:20.440 13:31:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:20.440 13:31:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.440 13:31:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.440 13:31:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.440 13:31:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2425908 00:05:20.440 13:31:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2425908 00:05:20.440 13:31:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
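default_locks_via_rpc drives the same core locks at runtime instead of through process lifetime: framework_disable_cpumask_locks releases the per-core lock files, framework_enable_cpumask_locks re-acquires them, and each state is asserted with the no_locks glob and the lslocks pipeline traced above (the "lslocks: write error" lines seen at these checks are benign — grep -q exits on first match and closes the pipe under lslocks). A sketch of the round trip, assuming nullglob so an empty glob really is empty:

    sock=/var/tmp/spdk.sock
    scripts/rpc.py -s "$sock" framework_disable_cpumask_locks   # release the core-0 lock
    shopt -s nullglob
    lock_files=(/var/tmp/spdk_cpu_lock_*)
    (( ${#lock_files[@]} == 0 ))                                # no_locks: nothing left behind
    scripts/rpc.py -s "$sock" framework_enable_cpumask_locks    # take it back
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock          # locks_exist: lock visible again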
00:05:21.011 13:31:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2425908 00:05:21.011 13:31:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2425908 ']' 00:05:21.011 13:31:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2425908 00:05:21.011 13:31:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:21.011 13:31:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.011 13:31:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2425908 00:05:21.011 13:31:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:21.011 13:31:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:21.011 13:31:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2425908' 00:05:21.011 killing process with pid 2425908 00:05:21.011 13:31:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2425908 00:05:21.011 13:31:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2425908 00:05:21.272 00:05:21.272 real 0m1.545s 00:05:21.272 user 0m1.657s 00:05:21.272 sys 0m0.487s 00:05:21.272 13:31:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.272 13:31:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.272 ************************************ 00:05:21.272 END TEST default_locks_via_rpc 00:05:21.272 ************************************ 00:05:21.272 13:31:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:21.272 13:31:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:21.272 13:31:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.272 13:31:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.272 13:31:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.272 ************************************ 00:05:21.272 START TEST non_locking_app_on_locked_coremask 00:05:21.272 ************************************ 00:05:21.272 13:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:21.272 13:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2426250 00:05:21.272 13:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2426250 /var/tmp/spdk.sock 00:05:21.272 13:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.272 13:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2426250 ']' 00:05:21.272 13:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.272 13:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.272 13:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.272 13:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.272 13:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.272 [2024-07-12 13:31:09.746422] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:05:21.272 [2024-07-12 13:31:09.746500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2426250 ] 00:05:21.272 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.272 [2024-07-12 13:31:09.812519] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.534 [2024-07-12 13:31:09.886090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.105 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.106 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:22.106 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2426439 00:05:22.106 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2426439 /var/tmp/spdk2.sock 00:05:22.106 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2426439 ']' 00:05:22.106 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:22.106 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:22.106 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.106 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:22.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:22.106 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.106 13:31:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.106 [2024-07-12 13:31:10.555820] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:05:22.106 [2024-07-12 13:31:10.555914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2426439 ] 00:05:22.106 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.106 [2024-07-12 13:31:10.643402] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
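non_locking_app_on_locked_coremask demonstrates the escape hatch: the first target holds the core-0 lock, yet the second one starts anyway because it is launched with --disable-cpumask-locks (hence the "CPU core locks deactivated" notice just above) and its own RPC socket. Sketch of the pair:

    ./build/bin/spdk_tgt -m 0x1 &                        # claims the core-0 lock
    spdk_tgt_pid=$!
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &                         # same core, starts despite the lock
    spdk_tgt_pid2=$!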
00:05:22.106 [2024-07-12 13:31:10.643429] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.367 [2024-07-12 13:31:10.778594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.937 13:31:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.937 13:31:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:22.937 13:31:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2426250 00:05:22.937 13:31:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2426250 00:05:22.937 13:31:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.508 lslocks: write error 00:05:23.508 13:31:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2426250 00:05:23.508 13:31:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2426250 ']' 00:05:23.508 13:31:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2426250 00:05:23.508 13:31:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:23.508 13:31:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.508 13:31:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2426250 00:05:23.508 13:31:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.508 13:31:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.508 13:31:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2426250' 00:05:23.508 killing process with pid 2426250 00:05:23.508 13:31:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2426250 00:05:23.508 13:31:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2426250 00:05:23.768 13:31:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2426439 00:05:23.768 13:31:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2426439 ']' 00:05:23.768 13:31:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2426439 00:05:23.768 13:31:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:23.768 13:31:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.768 13:31:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2426439 00:05:24.028 13:31:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.028 13:31:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.028 13:31:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2426439' 00:05:24.028 
killing process with pid 2426439 00:05:24.028 13:31:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2426439 00:05:24.028 13:31:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2426439 00:05:24.028 00:05:24.028 real 0m2.851s 00:05:24.028 user 0m3.083s 00:05:24.028 sys 0m0.837s 00:05:24.028 13:31:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.028 13:31:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.028 ************************************ 00:05:24.028 END TEST non_locking_app_on_locked_coremask 00:05:24.028 ************************************ 00:05:24.028 13:31:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:24.028 13:31:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:24.028 13:31:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.028 13:31:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.028 13:31:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.289 ************************************ 00:05:24.289 START TEST locking_app_on_unlocked_coremask 00:05:24.289 ************************************ 00:05:24.289 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:24.289 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2426817 00:05:24.289 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2426817 /var/tmp/spdk.sock 00:05:24.289 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:24.289 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2426817 ']' 00:05:24.289 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.289 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.289 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.289 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.289 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.289 [2024-07-12 13:31:12.670449] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
00:05:24.289 [2024-07-12 13:31:12.670531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2426817 ] 00:05:24.289 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.289 [2024-07-12 13:31:12.735177] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:24.289 [2024-07-12 13:31:12.735206] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.289 [2024-07-12 13:31:12.801240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.229 13:31:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.229 13:31:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:25.229 13:31:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:25.229 13:31:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2427146 00:05:25.229 13:31:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2427146 /var/tmp/spdk2.sock 00:05:25.229 13:31:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2427146 ']' 00:05:25.229 13:31:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.229 13:31:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.229 13:31:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.229 13:31:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.229 13:31:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.229 [2024-07-12 13:31:13.490517] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
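locking_app_on_unlocked_coremask reverses the roles: the first target runs with --disable-cpumask-locks, so core 0 stays unclaimed, and the second, lock-enforcing target on the same core starts cleanly and takes the lock itself — which is why the locks_exist check below runs against the second pid. Sketch:

    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # leaves core 0 unclaimed
    spdk_tgt_pid=$!
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # enforces locks, acquires core 0
    spdk_tgt_pid2=$!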
00:05:25.229 [2024-07-12 13:31:13.490583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2427146 ] 00:05:25.229 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.229 [2024-07-12 13:31:13.581498] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.229 [2024-07-12 13:31:13.713057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.798 13:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.798 13:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:25.798 13:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2427146 00:05:25.798 13:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2427146 00:05:25.798 13:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.368 lslocks: write error 00:05:26.368 13:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2426817 00:05:26.368 13:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2426817 ']' 00:05:26.368 13:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2426817 00:05:26.368 13:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:26.368 13:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.368 13:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2426817 00:05:26.368 13:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.368 13:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.368 13:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2426817' 00:05:26.368 killing process with pid 2426817 00:05:26.368 13:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2426817 00:05:26.368 13:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2426817 00:05:26.679 13:31:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2427146 00:05:26.679 13:31:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2427146 ']' 00:05:26.679 13:31:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2427146 00:05:26.679 13:31:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:26.679 13:31:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.679 13:31:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2427146 00:05:26.679 13:31:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:26.679 13:31:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.679 13:31:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2427146' 00:05:26.679 killing process with pid 2427146 00:05:26.679 13:31:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2427146 00:05:26.679 13:31:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2427146 00:05:26.969 00:05:26.969 real 0m2.766s 00:05:26.969 user 0m3.015s 00:05:26.969 sys 0m0.797s 00:05:26.969 13:31:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.969 13:31:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.969 ************************************ 00:05:26.969 END TEST locking_app_on_unlocked_coremask 00:05:26.969 ************************************ 00:05:26.969 13:31:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:26.969 13:31:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:26.969 13:31:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.969 13:31:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.969 13:31:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.969 ************************************ 00:05:26.969 START TEST locking_app_on_locked_coremask 00:05:26.969 ************************************ 00:05:26.969 13:31:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:26.969 13:31:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2427524 00:05:26.969 13:31:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2427524 /var/tmp/spdk.sock 00:05:26.969 13:31:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2427524 ']' 00:05:26.969 13:31:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.969 13:31:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.969 13:31:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.969 13:31:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.969 13:31:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.969 13:31:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.969 [2024-07-12 13:31:15.507773] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
00:05:26.969 [2024-07-12 13:31:15.507847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2427524 ] 00:05:26.969 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.229 [2024-07-12 13:31:15.569666] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.229 [2024-07-12 13:31:15.636343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.801 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.801 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:27.801 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2427564 00:05:27.801 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2427564 /var/tmp/spdk2.sock 00:05:27.801 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:27.801 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2427564 /var/tmp/spdk2.sock 00:05:27.801 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:27.801 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:27.801 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.801 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:27.801 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.801 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2427564 /var/tmp/spdk2.sock 00:05:27.801 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2427564 ']' 00:05:27.801 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.801 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.801 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.801 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.801 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.801 [2024-07-12 13:31:16.310115] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
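locking_app_on_locked_coremask is the negative case: with the first target holding core 0, a second lock-enforcing target must fail to come up, so its waitforlisten is wrapped in NOT, the autotest helper that inverts an exit status. A sketch of the inversion visible in the trace (es becomes 1 when the wrapped command fails; statuses above 128 mean death by signal and do not count as a clean failure):

    NOT() {
        local es=0
        "$@" || es=$?
        if ((es > 128)); then return 1; fi   # killed by a signal: not a clean failure
        ((es != 0))                          # succeed only if the command failed
    }

    NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock   # must fail: core 0 is claimed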
00:05:27.801 [2024-07-12 13:31:16.310212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2427564 ] 00:05:27.801 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.062 [2024-07-12 13:31:16.397884] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2427524 has claimed it. 00:05:28.062 [2024-07-12 13:31:16.397924] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:28.633 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2427564) - No such process 00:05:28.633 ERROR: process (pid: 2427564) is no longer running 00:05:28.633 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.633 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:28.633 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:28.633 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:28.633 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:28.633 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:28.633 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2427524 00:05:28.633 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2427524 00:05:28.633 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.894 lslocks: write error 00:05:28.894 13:31:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2427524 00:05:28.894 13:31:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2427524 ']' 00:05:28.895 13:31:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2427524 00:05:28.895 13:31:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:28.895 13:31:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.895 13:31:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2427524 00:05:28.895 13:31:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.895 13:31:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.895 13:31:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2427524' 00:05:28.895 killing process with pid 2427524 00:05:28.895 13:31:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2427524 00:05:28.895 13:31:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2427524 00:05:29.156 00:05:29.156 real 0m2.107s 00:05:29.156 user 0m2.312s 00:05:29.156 sys 0m0.581s 00:05:29.156 13:31:17 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.156 13:31:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.156 ************************************ 00:05:29.156 END TEST locking_app_on_locked_coremask 00:05:29.156 ************************************ 00:05:29.156 13:31:17 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:29.156 13:31:17 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:29.156 13:31:17 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.156 13:31:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.156 13:31:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.156 ************************************ 00:05:29.156 START TEST locking_overlapped_coremask 00:05:29.156 ************************************ 00:05:29.156 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:29.156 13:31:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2427902 00:05:29.156 13:31:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2427902 /var/tmp/spdk.sock 00:05:29.156 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2427902 ']' 00:05:29.156 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.156 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.156 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.156 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.156 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.156 13:31:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:29.156 [2024-07-12 13:31:17.684708] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
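locking_overlapped_coremask moves from identical masks to partially overlapping ones: the first target takes -m 0x7 (cores 0-2) and its rival will ask for -m 0x1c (cores 2-4), so they collide only on core 2 — exactly the core named in the claim_cpu_cores error further down. The mask arithmetic, with the values from the trace:

    first=0x7    # binary 00111 -> cores 0,1,2
    second=0x1c  # binary 11100 -> cores 2,3,4
    printf 'contested mask: 0x%x\n' $((first & second))   # prints 0x4 -> core 2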
00:05:29.156 [2024-07-12 13:31:17.684787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2427902 ] 00:05:29.156 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.416 [2024-07-12 13:31:17.748188] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:29.416 [2024-07-12 13:31:17.817882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.416 [2024-07-12 13:31:17.817995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.416 [2024-07-12 13:31:17.817998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.991 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.991 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:29.991 13:31:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2428185 00:05:29.991 13:31:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2428185 /var/tmp/spdk2.sock 00:05:29.991 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:29.991 13:31:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:29.991 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2428185 /var/tmp/spdk2.sock 00:05:29.991 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:29.991 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.991 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:29.991 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.991 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2428185 /var/tmp/spdk2.sock 00:05:29.991 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2428185 ']' 00:05:29.991 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.991 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.991 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.991 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.991 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.991 [2024-07-12 13:31:18.484213] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
00:05:29.991 [2024-07-12 13:31:18.484321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2428185 ] 00:05:29.991 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.991 [2024-07-12 13:31:18.558981] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2427902 has claimed it. 00:05:29.991 [2024-07-12 13:31:18.559012] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:30.562 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2428185) - No such process 00:05:30.562 ERROR: process (pid: 2428185) is no longer running 00:05:30.562 13:31:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.562 13:31:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:30.562 13:31:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:30.562 13:31:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:30.562 13:31:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:30.562 13:31:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:30.562 13:31:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:30.562 13:31:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:30.562 13:31:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:30.562 13:31:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:30.562 13:31:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2427902 00:05:30.562 13:31:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2427902 ']' 00:05:30.562 13:31:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2427902 00:05:30.562 13:31:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:30.562 13:31:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.562 13:31:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2427902 00:05:30.823 13:31:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:30.823 13:31:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:30.823 13:31:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2427902' 00:05:30.823 killing process with pid 2427902 00:05:30.823 13:31:19 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2427902 00:05:30.823 13:31:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2427902 00:05:30.823 00:05:30.823 real 0m1.707s 00:05:30.823 user 0m4.855s 00:05:30.823 sys 0m0.357s 00:05:30.823 13:31:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.823 13:31:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.823 ************************************ 00:05:30.823 END TEST locking_overlapped_coremask 00:05:30.823 ************************************ 00:05:30.823 13:31:19 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:30.823 13:31:19 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:30.823 13:31:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.823 13:31:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.823 13:31:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.084 ************************************ 00:05:31.084 START TEST locking_overlapped_coremask_via_rpc 00:05:31.084 ************************************ 00:05:31.084 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:31.084 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2428278 00:05:31.084 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2428278 /var/tmp/spdk.sock 00:05:31.084 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:31.084 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2428278 ']' 00:05:31.084 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.084 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.084 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.084 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.084 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.084 [2024-07-12 13:31:19.463092] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:05:31.084 [2024-07-12 13:31:19.463179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2428278 ] 00:05:31.084 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.084 [2024-07-12 13:31:19.525542] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
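Just before the END TEST marker above, check_remaining_locks confirmed the surviving -m 0x7 target still held exactly its own locks: the /var/tmp/spdk_cpu_lock_* glob must expand to the files for cores 0-2 and nothing more. A sketch of that assertion as traced:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]   # exactly cpu_lock_000..002 remain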
00:05:31.084 [2024-07-12 13:31:19.525576] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:31.084 [2024-07-12 13:31:19.593504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.084 [2024-07-12 13:31:19.593618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.084 [2024-07-12 13:31:19.593620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.025 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.025 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:32.025 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2428604 00:05:32.025 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2428604 /var/tmp/spdk2.sock 00:05:32.025 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2428604 ']' 00:05:32.025 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:32.025 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.025 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.025 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.025 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.025 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.025 [2024-07-12 13:31:20.286834] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:05:32.025 [2024-07-12 13:31:20.286922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2428604 ] 00:05:32.025 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.025 [2024-07-12 13:31:20.364161] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
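Each waitforlisten call traced above blocks until the freshly started spdk_tgt is reachable on its UNIX-domain RPC socket, giving up after max_retries=100. The banner and argument handling match the trace; the polling body below is an assumption:

waitforlisten() {                            # usage: waitforlisten <pid> [rpc_addr]
  local pid=$1
  local rpc_addr=${2:-/var/tmp/spdk.sock}
  local max_retries=100 i
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for (( i = 0; i < max_retries; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target died while starting up
    [ -S "$rpc_addr" ] && return 0           # socket exists: assume it is listening
    sleep 0.1
  done
  return 1
}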
00:05:32.025 [2024-07-12 13:31:20.364190] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:32.025 [2024-07-12 13:31:20.474652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.025 [2024-07-12 13:31:20.474811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.025 [2024-07-12 13:31:20.474814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.595 [2024-07-12 13:31:21.090284] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2428278 has claimed it. 
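Both overlap failures in this suite come down to the same mechanism: the first spdk_tgt claims each core in its -m mask by locking a /var/tmp/spdk_cpu_lock_NNN file, and a second target whose mask overlaps fails on the shared core (core 2 here, the one core common to 0x7 and 0x1c). A minimal flock(1) sketch of that shape, using only the file names visible in the trace; how SPDK takes the locks internally is not shown in this log:

exec 9> /var/tmp/spdk_cpu_lock_002        # first claimer opens core 2's lock file
flock -n 9 && echo "core 2 claimed"       # and takes the exclusive lock
(                                         # second claimer, like the overlapping target
  exec 8> /var/tmp/spdk_cpu_lock_002
  flock -n 8 || echo "Cannot create lock on core 2, another process has claimed it"
)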
00:05:32.595 request: 00:05:32.595 { 00:05:32.595 "method": "framework_enable_cpumask_locks", 00:05:32.595 "req_id": 1 00:05:32.595 } 00:05:32.595 Got JSON-RPC error response 00:05:32.595 response: 00:05:32.595 { 00:05:32.595 "code": -32603, 00:05:32.595 "message": "Failed to claim CPU core: 2" 00:05:32.595 } 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2428278 /var/tmp/spdk.sock 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2428278 ']' 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.595 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.856 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.856 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:32.856 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2428604 /var/tmp/spdk2.sock 00:05:32.856 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2428604 ']' 00:05:32.856 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.856 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.856 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
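The request/response pair above is the JSON-RPC view of the same collision: framework_enable_cpumask_locks on the second target returns -32603 while pid 2428278 still holds core 2. rpc_cmd in the trace is the harness wrapper around SPDK's scripts/rpc.py; assuming that, the manual equivalent would be roughly:

scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# while the first target holds core 2, this fails with the response shown above:
#   "code": -32603, "message": "Failed to claim CPU core: 2"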
00:05:32.856 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.856 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.116 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.116 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:33.116 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:33.116 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:33.116 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:33.116 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:33.116 00:05:33.116 real 0m2.002s 00:05:33.116 user 0m0.765s 00:05:33.116 sys 0m0.162s 00:05:33.116 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.116 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.116 ************************************ 00:05:33.116 END TEST locking_overlapped_coremask_via_rpc 00:05:33.116 ************************************ 00:05:33.116 13:31:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:33.116 13:31:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:33.116 13:31:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2428278 ]] 00:05:33.116 13:31:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2428278 00:05:33.116 13:31:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2428278 ']' 00:05:33.116 13:31:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2428278 00:05:33.116 13:31:21 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:33.116 13:31:21 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:33.116 13:31:21 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2428278 00:05:33.116 13:31:21 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:33.116 13:31:21 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:33.116 13:31:21 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2428278' 00:05:33.116 killing process with pid 2428278 00:05:33.116 13:31:21 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2428278 00:05:33.117 13:31:21 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2428278 00:05:33.377 13:31:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2428604 ]] 00:05:33.377 13:31:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2428604 00:05:33.377 13:31:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2428604 ']' 00:05:33.377 13:31:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2428604 00:05:33.377 13:31:21 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:05:33.377 13:31:21 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:33.378 13:31:21 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2428604 00:05:33.378 13:31:21 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:33.378 13:31:21 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:33.378 13:31:21 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2428604' 00:05:33.378 killing process with pid 2428604 00:05:33.378 13:31:21 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2428604 00:05:33.378 13:31:21 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2428604 00:05:33.638 13:31:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:33.638 13:31:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:33.638 13:31:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2428278 ]] 00:05:33.638 13:31:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2428278 00:05:33.638 13:31:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2428278 ']' 00:05:33.638 13:31:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2428278 00:05:33.638 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2428278) - No such process 00:05:33.638 13:31:21 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2428278 is not found' 00:05:33.638 Process with pid 2428278 is not found 00:05:33.638 13:31:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2428604 ]] 00:05:33.638 13:31:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2428604 00:05:33.638 13:31:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2428604 ']' 00:05:33.638 13:31:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2428604 00:05:33.638 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2428604) - No such process 00:05:33.638 13:31:21 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2428604 is not found' 00:05:33.638 Process with pid 2428604 is not found 00:05:33.638 13:31:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:33.638 00:05:33.638 real 0m15.340s 00:05:33.638 user 0m26.635s 00:05:33.638 sys 0m4.487s 00:05:33.638 13:31:21 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.638 13:31:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.638 ************************************ 00:05:33.638 END TEST cpu_locks 00:05:33.638 ************************************ 00:05:33.638 13:31:22 event -- common/autotest_common.sh@1142 -- # return 0 00:05:33.638 00:05:33.638 real 0m40.719s 00:05:33.638 user 1m19.308s 00:05:33.638 sys 0m7.798s 00:05:33.638 13:31:22 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.638 13:31:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.638 ************************************ 00:05:33.638 END TEST event 00:05:33.638 ************************************ 00:05:33.638 13:31:22 -- common/autotest_common.sh@1142 -- # return 0 00:05:33.638 13:31:22 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:33.638 13:31:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.638 13:31:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.638 
13:31:22 -- common/autotest_common.sh@10 -- # set +x 00:05:33.638 ************************************ 00:05:33.638 START TEST thread 00:05:33.638 ************************************ 00:05:33.638 13:31:22 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:33.638 * Looking for test storage... 00:05:33.639 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:05:33.639 13:31:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:33.639 13:31:22 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:33.639 13:31:22 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.639 13:31:22 thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.899 ************************************ 00:05:33.899 START TEST thread_poller_perf 00:05:33.899 ************************************ 00:05:33.899 13:31:22 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:33.899 [2024-07-12 13:31:22.255830] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:05:33.899 [2024-07-12 13:31:22.255936] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2429040 ] 00:05:33.899 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.899 [2024-07-12 13:31:22.322867] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.899 [2024-07-12 13:31:22.390820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.899 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:35.282 ====================================== 00:05:35.282 busy:2406651386 (cyc) 00:05:35.282 total_run_count: 582000 00:05:35.282 tsc_hz: 2400000000 (cyc) 00:05:35.282 ====================================== 00:05:35.282 poller_cost: 4135 (cyc), 1722 (nsec) 00:05:35.282 00:05:35.282 real 0m1.204s 00:05:35.282 user 0m1.119s 00:05:35.282 sys 0m0.081s 00:05:35.282 13:31:23 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.282 13:31:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:35.282 ************************************ 00:05:35.282 END TEST thread_poller_perf 00:05:35.282 ************************************ 00:05:35.282 13:31:23 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:35.282 13:31:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:35.282 13:31:23 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:35.282 13:31:23 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.282 13:31:23 thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.282 ************************************ 00:05:35.282 START TEST thread_poller_perf 00:05:35.282 ************************************ 00:05:35.282 13:31:23 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:35.282 [2024-07-12 13:31:23.537940] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:05:35.282 [2024-07-12 13:31:23.538038] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2429363 ] 00:05:35.282 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.282 [2024-07-12 13:31:23.602366] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.282 [2024-07-12 13:31:23.668195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.282 Running 1000 pollers for 1 seconds with 0 microseconds period. 
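The poller_cost line in the first run above is plain division over the counters printed with it: busy cycles over iterations, then cycles converted to nanoseconds at the reported TSC rate. Checking the run's own numbers:

echo $(( 2406651386 / 582000 ))             # busy / total_run_count -> 4135 cyc
echo $(( 4135 * 1000000000 / 2400000000 ))  # cyc * 1e9 / tsc_hz     -> 1722 nsec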
00:05:36.223 ====================================== 00:05:36.223 busy:2401228444 (cyc) 00:05:36.223 total_run_count: 11311000 00:05:36.223 tsc_hz: 2400000000 (cyc) 00:05:36.223 ====================================== 00:05:36.223 poller_cost: 212 (cyc), 88 (nsec) 00:05:36.223 00:05:36.223 real 0m1.195s 00:05:36.223 user 0m1.111s 00:05:36.223 sys 0m0.081s 00:05:36.223 13:31:24 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.223 13:31:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.223 ************************************ 00:05:36.224 END TEST thread_poller_perf 00:05:36.224 ************************************ 00:05:36.224 13:31:24 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:36.224 13:31:24 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:05:36.224 13:31:24 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:36.224 13:31:24 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.224 13:31:24 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.224 13:31:24 thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.224 ************************************ 00:05:36.224 START TEST thread_spdk_lock 00:05:36.224 ************************************ 00:05:36.224 13:31:24 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:36.485 [2024-07-12 13:31:24.809614] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:05:36.485 [2024-07-12 13:31:24.809736] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2429494 ] 00:05:36.485 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.485 [2024-07-12 13:31:24.880216] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.485 [2024-07-12 13:31:24.953802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.485 [2024-07-12 13:31:24.953805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.056 [2024-07-12 13:31:25.429473] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:37.056 [2024-07-12 13:31:25.429509] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:05:37.056 [2024-07-12 13:31:25.429518] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x14ceec0 00:05:37.056 [2024-07-12 13:31:25.430377] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:37.056 [2024-07-12 13:31:25.430480] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:37.056 [2024-07-12 13:31:25.430495] 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:37.056 Starting test contend 00:05:37.056 Worker Delay Wait us Hold us Total us 00:05:37.056 0 3 210339 177536 387876 00:05:37.056 1 5 132493 276327 408821 00:05:37.056 PASS test contend 00:05:37.056 Starting test hold_by_poller 00:05:37.056 PASS test hold_by_poller 00:05:37.057 Starting test hold_by_message 00:05:37.057 PASS test hold_by_message 00:05:37.057 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:05:37.057 100014 assertions passed 00:05:37.057 0 assertions failed 00:05:37.057 00:05:37.057 real 0m0.684s 00:05:37.057 user 0m1.072s 00:05:37.057 sys 0m0.084s 00:05:37.057 13:31:25 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.057 13:31:25 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:05:37.057 ************************************ 00:05:37.057 END TEST thread_spdk_lock 00:05:37.057 ************************************ 00:05:37.057 13:31:25 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:37.057 00:05:37.057 real 0m3.406s 00:05:37.057 user 0m3.417s 00:05:37.057 sys 0m0.472s 00:05:37.057 13:31:25 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.057 13:31:25 thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.057 ************************************ 00:05:37.057 END TEST thread 00:05:37.057 ************************************ 00:05:37.057 13:31:25 -- common/autotest_common.sh@1142 -- # return 0 00:05:37.057 13:31:25 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:37.057 13:31:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.057 13:31:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.057 13:31:25 -- common/autotest_common.sh@10 -- # set +x 00:05:37.057 ************************************ 00:05:37.057 START TEST accel 00:05:37.057 ************************************ 00:05:37.057 13:31:25 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:37.318 * Looking for test storage... 00:05:37.318 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:05:37.318 13:31:25 accel -- accel/accel.sh@95 -- # declare -A expected_opcs 00:05:37.318 13:31:25 accel -- accel/accel.sh@96 -- # get_expected_opcs 00:05:37.318 13:31:25 accel -- accel/accel.sh@69 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:37.318 13:31:25 accel -- accel/accel.sh@71 -- # spdk_tgt_pid=2429822 00:05:37.318 13:31:25 accel -- accel/accel.sh@72 -- # waitforlisten 2429822 00:05:37.318 13:31:25 accel -- common/autotest_common.sh@829 -- # '[' -z 2429822 ']' 00:05:37.318 13:31:25 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.318 13:31:25 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.318 13:31:25 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
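In the contend table above, each worker's Total column is its Wait plus its Hold column to within one microsecond; the one-count drift is consistent with the columns being truncated independently before printing. Checking:

awk 'BEGIN { print 210339 + 177536; print 132493 + 276327 }'  # 387875, 408820 vs reported 387876, 408821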
00:05:37.318 13:31:25 accel -- accel/accel.sh@70 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:37.318 13:31:25 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.318 13:31:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.318 13:31:25 accel -- accel/accel.sh@70 -- # build_accel_config 00:05:37.318 13:31:25 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.318 13:31:25 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.318 13:31:25 accel -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:37.318 13:31:25 accel -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:37.318 13:31:25 accel -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:37.318 13:31:25 accel -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:37.318 13:31:25 accel -- accel/accel.sh@49 -- # local IFS=, 00:05:37.318 13:31:25 accel -- accel/accel.sh@50 -- # jq -r . 00:05:37.318 [2024-07-12 13:31:25.716088] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:05:37.318 [2024-07-12 13:31:25.716171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2429822 ] 00:05:37.318 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.318 [2024-07-12 13:31:25.783116] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.318 [2024-07-12 13:31:25.859761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.261 13:31:26 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.261 13:31:26 accel -- common/autotest_common.sh@862 -- # return 0 00:05:38.261 13:31:26 accel -- accel/accel.sh@74 -- # [[ 0 -gt 0 ]] 00:05:38.261 13:31:26 accel -- accel/accel.sh@77 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:38.261 13:31:26 accel -- accel/accel.sh@78 -- # [[ 0 -gt 0 ]] 00:05:38.261 13:31:26 accel -- accel/accel.sh@81 -- # [[ 0 -gt 0 ]] 00:05:38.261 13:31:26 accel -- accel/accel.sh@82 -- # [[ -n '' ]] 00:05:38.261 13:31:26 accel -- accel/accel.sh@84 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:38.261 13:31:26 accel -- accel/accel.sh@84 -- # rpc_cmd accel_get_opc_assignments 00:05:38.261 13:31:26 accel -- accel/accel.sh@84 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:38.261 13:31:26 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.261 13:31:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.261 13:31:26 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.261 13:31:26 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # IFS== 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:38.261 13:31:26 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:38.261 13:31:26 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # IFS== 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:38.261 13:31:26 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:38.261 13:31:26 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # IFS== 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:38.261 13:31:26 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:38.261 13:31:26 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # IFS== 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:38.261 13:31:26 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:38.261 13:31:26 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # IFS== 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:38.261 13:31:26 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:38.261 13:31:26 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # IFS== 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:38.261 13:31:26 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:38.261 13:31:26 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # IFS== 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:38.261 13:31:26 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:38.261 13:31:26 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # IFS== 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:38.261 13:31:26 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:38.261 13:31:26 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # IFS== 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:38.261 13:31:26 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:38.261 13:31:26 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # IFS== 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:38.261 13:31:26 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:38.261 13:31:26 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # IFS== 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:38.261 
13:31:26 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:38.261 13:31:26 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # IFS== 00:05:38.261 13:31:26 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:38.261 13:31:26 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:38.262 13:31:26 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.262 13:31:26 accel -- accel/accel.sh@86 -- # IFS== 00:05:38.262 13:31:26 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:38.262 13:31:26 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:38.262 13:31:26 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.262 13:31:26 accel -- accel/accel.sh@86 -- # IFS== 00:05:38.262 13:31:26 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:38.262 13:31:26 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:38.262 13:31:26 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:38.262 13:31:26 accel -- accel/accel.sh@86 -- # IFS== 00:05:38.262 13:31:26 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:38.262 13:31:26 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:38.262 13:31:26 accel -- accel/accel.sh@89 -- # killprocess 2429822 00:05:38.262 13:31:26 accel -- common/autotest_common.sh@948 -- # '[' -z 2429822 ']' 00:05:38.262 13:31:26 accel -- common/autotest_common.sh@952 -- # kill -0 2429822 00:05:38.262 13:31:26 accel -- common/autotest_common.sh@953 -- # uname 00:05:38.262 13:31:26 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.262 13:31:26 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2429822 00:05:38.262 13:31:26 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.262 13:31:26 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.262 13:31:26 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2429822' 00:05:38.262 killing process with pid 2429822 00:05:38.262 13:31:26 accel -- common/autotest_common.sh@967 -- # kill 2429822 00:05:38.262 13:31:26 accel -- common/autotest_common.sh@972 -- # wait 2429822 00:05:38.262 13:31:26 accel -- accel/accel.sh@90 -- # trap - ERR 00:05:38.262 13:31:26 accel -- accel/accel.sh@103 -- # run_test accel_help accel_perf -h 00:05:38.262 13:31:26 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:38.262 13:31:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.262 13:31:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.524 13:31:26 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:38.524 13:31:26 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:38.524 13:31:26 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:38.524 13:31:26 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.524 13:31:26 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.524 13:31:26 accel.accel_help -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:38.524 13:31:26 accel.accel_help -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:38.524 13:31:26 accel.accel_help -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:38.524 13:31:26 accel.accel_help -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:38.524 13:31:26 accel.accel_help -- accel/accel.sh@49 -- 
# local IFS=, 00:05:38.524 13:31:26 accel.accel_help -- accel/accel.sh@50 -- # jq -r . 00:05:38.524 13:31:26 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.524 13:31:26 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:38.524 13:31:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:38.524 13:31:26 accel -- accel/accel.sh@105 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:38.524 13:31:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:38.524 13:31:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.524 13:31:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.524 ************************************ 00:05:38.524 START TEST accel_missing_filename 00:05:38.524 ************************************ 00:05:38.524 13:31:26 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:38.524 13:31:26 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:38.524 13:31:26 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:38.524 13:31:26 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:38.524 13:31:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.524 13:31:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:38.524 13:31:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.524 13:31:26 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:38.524 13:31:26 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:38.524 13:31:26 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:38.524 13:31:26 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.524 13:31:26 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.524 13:31:26 accel.accel_missing_filename -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:38.524 13:31:26 accel.accel_missing_filename -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:38.524 13:31:26 accel.accel_missing_filename -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:38.524 13:31:26 accel.accel_missing_filename -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:38.524 13:31:26 accel.accel_missing_filename -- accel/accel.sh@49 -- # local IFS=, 00:05:38.524 13:31:26 accel.accel_missing_filename -- accel/accel.sh@50 -- # jq -r . 00:05:38.524 [2024-07-12 13:31:26.955864] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
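Every START TEST / END TEST banner in this log is emitted by the run_test helper in autotest_common.sh, which brackets the named command and passes its exit status through. A stripped-down sketch of that shape; everything beyond the banners and argument handling visible in the trace is assumed:

run_test() {                       # usage: run_test <name> <command> [args...]
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  "$@"                             # run the test command itself
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}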
00:05:38.524 [2024-07-12 13:31:26.955954] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2430173 ] 00:05:38.524 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.524 [2024-07-12 13:31:27.022829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.524 [2024-07-12 13:31:27.093960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.785 [2024-07-12 13:31:27.123888] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:38.785 [2024-07-12 13:31:27.160807] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:38.785 A filename is required. 00:05:38.785 13:31:27 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:38.785 13:31:27 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:38.785 13:31:27 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:38.785 13:31:27 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:38.785 13:31:27 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:38.785 13:31:27 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:38.785 00:05:38.785 real 0m0.278s 00:05:38.785 user 0m0.207s 00:05:38.785 sys 0m0.113s 00:05:38.785 13:31:27 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.785 13:31:27 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:38.785 ************************************ 00:05:38.785 END TEST accel_missing_filename 00:05:38.785 ************************************ 00:05:38.785 13:31:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:38.785 13:31:27 accel -- accel/accel.sh@107 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:38.785 13:31:27 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:38.785 13:31:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.785 13:31:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.785 ************************************ 00:05:38.785 START TEST accel_compress_verify 00:05:38.785 ************************************ 00:05:38.785 13:31:27 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:38.785 13:31:27 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:38.785 13:31:27 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:38.785 13:31:27 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:38.785 13:31:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.785 13:31:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:38.785 13:31:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.785 13:31:27 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 
-w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:38.785 13:31:27 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:38.785 13:31:27 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:38.785 13:31:27 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.785 13:31:27 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.785 13:31:27 accel.accel_compress_verify -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:38.785 13:31:27 accel.accel_compress_verify -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:38.785 13:31:27 accel.accel_compress_verify -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:38.785 13:31:27 accel.accel_compress_verify -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:38.785 13:31:27 accel.accel_compress_verify -- accel/accel.sh@49 -- # local IFS=, 00:05:38.785 13:31:27 accel.accel_compress_verify -- accel/accel.sh@50 -- # jq -r . 00:05:38.785 [2024-07-12 13:31:27.309843] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:05:38.785 [2024-07-12 13:31:27.309964] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2430210 ] 00:05:38.785 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.046 [2024-07-12 13:31:27.376616] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.046 [2024-07-12 13:31:27.446457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.046 [2024-07-12 13:31:27.476184] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:39.046 [2024-07-12 13:31:27.513177] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:39.046 00:05:39.046 Compression does not support the verify option, aborting. 
00:05:39.046 13:31:27 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:39.046 13:31:27 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:39.046 13:31:27 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:39.046 13:31:27 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:39.046 13:31:27 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:39.046 13:31:27 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:39.046 00:05:39.046 real 0m0.277s 00:05:39.046 user 0m0.208s 00:05:39.046 sys 0m0.110s 00:05:39.046 13:31:27 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.046 13:31:27 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:39.046 ************************************ 00:05:39.046 END TEST accel_compress_verify 00:05:39.046 ************************************ 00:05:39.046 13:31:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:39.046 13:31:27 accel -- accel/accel.sh@109 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:39.046 13:31:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:39.046 13:31:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.046 13:31:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.308 ************************************ 00:05:39.308 START TEST accel_wrong_workload 00:05:39.308 ************************************ 00:05:39.308 13:31:27 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:39.308 13:31:27 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:39.308 13:31:27 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:39.308 13:31:27 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:39.308 13:31:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.308 13:31:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:39.308 13:31:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.308 13:31:27 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:39.308 13:31:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:39.308 13:31:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:39.308 13:31:27 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.308 13:31:27 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.308 13:31:27 accel.accel_wrong_workload -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:39.308 13:31:27 accel.accel_wrong_workload -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:39.308 13:31:27 accel.accel_wrong_workload -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:39.308 13:31:27 accel.accel_wrong_workload -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:39.308 13:31:27 accel.accel_wrong_workload -- accel/accel.sh@49 -- # local IFS=, 00:05:39.308 13:31:27 accel.accel_wrong_workload -- accel/accel.sh@50 -- # jq -r . 
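The es= sequences in the two compress tests above are the NOT helper normalizing accel_perf's exit status: 234 and 161 both exceed 128, so 128 is subtracted (234 -> 106, 161 -> 33) before any nonzero remainder collapses to 1, which NOT then inverts into a pass. The subtraction and the collapse are read straight off the trace; the surrounding control flow is an assumption:

es=234                                  # accel_perf's raw exit status (161 in the second run)
(( es > 128 )) && es=$(( es - 128 ))    # strip the signal offset: 234 -> 106, 161 -> 33
case "$es" in
  0) ;;                                 # genuine success stays 0
  *) es=1 ;;                            # any failure collapses to 1 for NOT to invert
esac
echo "$es"                              # -> 1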
00:05:39.308 Unsupported workload type: foobar 00:05:39.308 [2024-07-12 13:31:27.656170] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:39.308 accel_perf options: 00:05:39.308 [-h help message] 00:05:39.308 [-q queue depth per core] 00:05:39.308 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:39.308 [-T number of threads per core 00:05:39.308 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:39.308 [-t time in seconds] 00:05:39.308 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:39.308 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:39.308 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:39.308 [-l for compress/decompress workloads, name of uncompressed input file 00:05:39.308 [-S for crc32c workload, use this seed value (default 0) 00:05:39.308 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:39.308 [-f for fill workload, use this BYTE value (default 255) 00:05:39.308 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:39.308 [-y verify result if this switch is on] 00:05:39.308 [-a tasks to allocate per core (default: same value as -q)] 00:05:39.308 Can be used to spread operations across a wider range of memory. 00:05:39.308 13:31:27 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:39.308 13:31:27 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:39.308 13:31:27 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:39.308 13:31:27 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:39.308 00:05:39.308 real 0m0.026s 00:05:39.308 user 0m0.011s 00:05:39.308 sys 0m0.015s 00:05:39.308 13:31:27 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.308 13:31:27 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:39.308 ************************************ 00:05:39.308 END TEST accel_wrong_workload 00:05:39.308 ************************************ 00:05:39.308 Error: writing output failed: Broken pipe 00:05:39.308 13:31:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:39.308 13:31:27 accel -- accel/accel.sh@111 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:39.308 13:31:27 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:39.308 13:31:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.308 13:31:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.308 ************************************ 00:05:39.308 START TEST accel_negative_buffers 00:05:39.308 ************************************ 00:05:39.308 13:31:27 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:39.308 13:31:27 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:39.308 13:31:27 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:39.308 13:31:27 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:39.308 13:31:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:05:39.308 13:31:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:39.308 13:31:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.308 13:31:27 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:39.308 13:31:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:39.308 13:31:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:39.308 13:31:27 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.308 13:31:27 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.308 13:31:27 accel.accel_negative_buffers -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:39.308 13:31:27 accel.accel_negative_buffers -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:39.308 13:31:27 accel.accel_negative_buffers -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:39.308 13:31:27 accel.accel_negative_buffers -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:39.308 13:31:27 accel.accel_negative_buffers -- accel/accel.sh@49 -- # local IFS=, 00:05:39.308 13:31:27 accel.accel_negative_buffers -- accel/accel.sh@50 -- # jq -r . 00:05:39.308 -x option must be non-negative. 00:05:39.308 [2024-07-12 13:31:27.754067] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:39.308 accel_perf options: 00:05:39.308 [-h help message] 00:05:39.308 [-q queue depth per core] 00:05:39.308 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:39.308 [-T number of threads per core 00:05:39.308 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:39.308 [-t time in seconds] 00:05:39.308 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:39.308 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:39.308 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:39.308 [-l for compress/decompress workloads, name of uncompressed input file 00:05:39.308 [-S for crc32c workload, use this seed value (default 0) 00:05:39.309 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:39.309 [-f for fill workload, use this BYTE value (default 255) 00:05:39.309 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:39.309 [-y verify result if this switch is on] 00:05:39.309 [-a tasks to allocate per core (default: same value as -q)] 00:05:39.309 Can be used to spread operations across a wider range of memory. 
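The option summary above is printed once for each rejected flag (-w foobar, then -x -1), and it maps directly onto the valid invocation the crc32c test below makes. Reconstructed from that trace (the harness also passes -c /dev/fd/62, the JSON config that build_accel_config assembles by joining accel_json_cfg fragments with IFS=, and piping through jq -r .):

/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w crc32c -S 32 -y    # crc32c workload for 1 second, seed 32, verify on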
00:05:39.309 13:31:27 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:39.309 13:31:27 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:39.309 13:31:27 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:39.309 13:31:27 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:39.309 00:05:39.309 real 0m0.026s 00:05:39.309 user 0m0.014s 00:05:39.309 sys 0m0.012s 00:05:39.309 13:31:27 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.309 13:31:27 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:39.309 ************************************ 00:05:39.309 END TEST accel_negative_buffers 00:05:39.309 ************************************ 00:05:39.309 Error: writing output failed: Broken pipe 00:05:39.309 13:31:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:39.309 13:31:27 accel -- accel/accel.sh@115 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:39.309 13:31:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:39.309 13:31:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.309 13:31:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.309 ************************************ 00:05:39.309 START TEST accel_crc32c 00:05:39.309 ************************************ 00:05:39.309 13:31:27 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:39.309 13:31:27 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:39.309 13:31:27 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:39.309 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.309 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.309 13:31:27 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:39.309 13:31:27 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:39.309 13:31:27 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:39.309 13:31:27 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.309 13:31:27 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.309 13:31:27 accel.accel_crc32c -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:39.309 13:31:27 accel.accel_crc32c -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:39.309 13:31:27 accel.accel_crc32c -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:39.309 13:31:27 accel.accel_crc32c -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:39.309 13:31:27 accel.accel_crc32c -- accel/accel.sh@49 -- # local IFS=, 00:05:39.309 13:31:27 accel.accel_crc32c -- accel/accel.sh@50 -- # jq -r . 00:05:39.309 [2024-07-12 13:31:27.854928] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
00:05:39.309 [2024-07-12 13:31:27.855027] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2430279 ] 00:05:39.309 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.570 [2024-07-12 13:31:27.920177] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.570 [2024-07-12 13:31:27.986507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.570 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.570 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.570 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.570 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.570 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 
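
The long runs of case/IFS/read records in this stretch are a single loop in accel.sh: the test prints its expected settings as var:val pairs and reads them back with IFS=: so each line splits into a key and a value. A minimal standalone sketch of that parsing pattern (hypothetical key names; not the literal accel.sh source):

# Split each "key:value" line on ':' and dispatch on the key, as the
# accel.sh@19-@23 trace records above suggest.
while IFS=: read -r var val; do
  case "$var" in
    opc)    accel_opc=$val ;;
    module) accel_module=$val ;;
  esac
done <<'EOF'
opc:crc32c
module:software
EOF
echo "$accel_opc via $accel_module"
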
00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.571 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.959 13:31:29 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:40.959 13:31:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.959 00:05:40.959 real 0m1.276s 00:05:40.959 user 0m1.183s 00:05:40.959 sys 0m0.104s 00:05:40.959 13:31:29 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.959 13:31:29 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:40.959 ************************************ 00:05:40.959 END TEST accel_crc32c 00:05:40.959 ************************************ 00:05:40.959 13:31:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:40.959 13:31:29 accel -- accel/accel.sh@116 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:40.959 13:31:29 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:40.960 13:31:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.960 13:31:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.960 ************************************ 00:05:40.960 START TEST accel_crc32c_C2 00:05:40.960 ************************************ 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@49 -- # local IFS=, 00:05:40.960 13:31:29 
accel.accel_crc32c_C2 -- accel/accel.sh@50 -- # jq -r . 00:05:40.960 [2024-07-12 13:31:29.205534] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:05:40.960 [2024-07-12 13:31:29.205622] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2430627 ] 00:05:40.960 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.960 [2024-07-12 13:31:29.270494] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.960 [2024-07-12 13:31:29.338123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.960 13:31:29 accel.accel_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.960 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.902 
13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.902 00:05:41.902 real 0m1.277s 00:05:41.902 user 0m1.178s 00:05:41.902 sys 0m0.110s 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.902 13:31:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:41.902 ************************************ 00:05:41.902 END TEST accel_crc32c_C2 00:05:41.902 ************************************ 00:05:42.163 13:31:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.163 13:31:30 accel -- accel/accel.sh@117 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:42.163 13:31:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:42.163 13:31:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.163 13:31:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.163 ************************************ 00:05:42.163 START TEST accel_copy 00:05:42.163 ************************************ 00:05:42.163 13:31:30 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 
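
The only difference between accel_crc32c and the accel_crc32c_C2 run that just finished is the -C 2 flag, which per the usage text printed earlier sets the I/O vector size, so each operation checksums a two-element scatter list. A standalone equivalent (sketch, without the harness's -c config descriptor):

# Same crc32c verification run, but with two iovec elements per operation.
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 2
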
00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@49 -- # local IFS=, 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@50 -- # jq -r . 00:05:42.163 [2024-07-12 13:31:30.554850] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:05:42.163 [2024-07-12 13:31:30.554946] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2430977 ] 00:05:42.163 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.163 [2024-07-12 13:31:30.619460] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.163 [2024-07-12 13:31:30.687237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.163 13:31:30 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.163 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.548 13:31:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:43.548 13:31:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.548 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.548 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.548 13:31:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:43.548 13:31:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.548 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.548 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.548 13:31:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:43.548 13:31:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.548 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.548 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.548 13:31:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 
00:05:43.548 13:31:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.548 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.548 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.548 13:31:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:43.548 13:31:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.549 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.549 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.549 13:31:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:43.549 13:31:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.549 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.549 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.549 13:31:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.549 13:31:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:43.549 13:31:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.549 00:05:43.549 real 0m1.276s 00:05:43.549 user 0m1.178s 00:05:43.549 sys 0m0.109s 00:05:43.549 13:31:31 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.549 13:31:31 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:43.549 ************************************ 00:05:43.549 END TEST accel_copy 00:05:43.549 ************************************ 00:05:43.549 13:31:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:43.549 13:31:31 accel -- accel/accel.sh@118 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:43.549 13:31:31 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:43.549 13:31:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.549 13:31:31 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.549 ************************************ 00:05:43.549 START TEST accel_fill 00:05:43.549 ************************************ 00:05:43.549 13:31:31 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:43.549 13:31:31 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:43.549 13:31:31 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:43.549 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.549 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.549 13:31:31 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:43.549 13:31:31 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:43.549 13:31:31 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:43.549 13:31:31 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.549 13:31:31 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.549 13:31:31 accel.accel_fill -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:43.549 13:31:31 accel.accel_fill -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:43.549 13:31:31 accel.accel_fill -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:43.549 13:31:31 accel.accel_fill -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:43.549 13:31:31 accel.accel_fill -- accel/accel.sh@49 -- # local IFS=, 00:05:43.549 13:31:31 accel.accel_fill -- accel/accel.sh@50 -- # jq -r . 
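
accel_fill is the first test in this stretch to override the queueing defaults; going by the usage text printed earlier, -f 128 sets the fill byte, -q 64 the per-core queue depth, and -a 64 the tasks allocated per core. A standalone sketch of the same run:

# fill writes a repeating byte pattern; -a matching -q keeps one allocated
# task available per queue slot.
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
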
00:05:43.549 [2024-07-12 13:31:31.906594] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:05:43.549 [2024-07-12 13:31:31.906686] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431263 ] 00:05:43.549 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.549 [2024-07-12 13:31:31.973196] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.549 [2024-07-12 13:31:32.045971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.549 13:31:32 accel.accel_fill 
-- accel/accel.sh@20 -- # val=software 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.549 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.932 13:31:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.932 13:31:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.932 13:31:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.932 13:31:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.932 13:31:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.932 13:31:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.932 13:31:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.933 13:31:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.933 13:31:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.933 13:31:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
00:05:44.933 13:31:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.933 13:31:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.933 13:31:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.933 13:31:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.933 13:31:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.933 13:31:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:44.933 13:31:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.933 00:05:44.933 real 0m1.284s 00:05:44.933 user 0m1.180s 00:05:44.933 sys 0m0.116s 00:05:44.933 13:31:33 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.933 13:31:33 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:44.933 ************************************ 00:05:44.933 END TEST accel_fill 00:05:44.933 ************************************ 00:05:44.933 13:31:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.933 13:31:33 accel -- accel/accel.sh@119 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:44.933 13:31:33 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:44.933 13:31:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.933 13:31:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.933 ************************************ 00:05:44.933 START TEST accel_copy_crc32c 00:05:44.933 ************************************ 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@49 -- # local IFS=, 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@50 -- # jq -r . 
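
Each accel_perf start in this log prints "EAL: No free 2048 kB hugepages reported on node 1", and the runs proceed anyway, so the allocation is evidently satisfied elsewhere. When such a notice does matter, the per-node hugepage counters are the first thing to check; this is generic Linux sysfs, not SPDK-specific:

# Per-NUMA-node 2 MiB hugepage counters; a zero nr_hugepages under node1
# would be consistent with the EAL notice repeated in this log.
grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/*_hugepages
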
00:05:44.933 [2024-07-12 13:31:33.265942] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:05:44.933 [2024-07-12 13:31:33.266026] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431447 ] 00:05:44.933 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.933 [2024-07-12 13:31:33.332706] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.933 [2024-07-12 13:31:33.405187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 
00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.933 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:46.319 13:31:34 accel.accel_copy_crc32c 
-- accel/accel.sh@19 -- # IFS=: 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.319 00:05:46.319 real 0m1.284s 00:05:46.319 user 0m1.187s 00:05:46.319 sys 0m0.109s 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.319 13:31:34 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:46.319 ************************************ 00:05:46.319 END TEST accel_copy_crc32c 00:05:46.319 ************************************ 00:05:46.319 13:31:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:46.319 13:31:34 accel -- accel/accel.sh@120 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:46.319 13:31:34 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:46.319 13:31:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.319 13:31:34 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.319 ************************************ 00:05:46.319 START TEST accel_copy_crc32c_C2 00:05:46.319 ************************************ 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
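
copy_crc32c chains a copy with a CRC-32C computation in one operation; note in the value traces that the base run just completed programs two 4096-byte buffers (source and destination), while the -C 2 variant starting next pairs 4096-byte vector elements with an 8192-byte destination. A standalone sketch of the chained run:

# One operation both copies the buffer and returns its CRC-32C.
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y
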
00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@49 -- # local IFS=, 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@50 -- # jq -r . 00:05:46.319 [2024-07-12 13:31:34.626052] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:05:46.319 [2024-07-12 13:31:34.626149] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431719 ] 00:05:46.319 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.319 [2024-07-12 13:31:34.694549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.319 [2024-07-12 13:31:34.766902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.319 13:31:34 
00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:05:46.319 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:05:46.320 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:46.320 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:05:46.320 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:05:46.320 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:05:46.320 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:46.320 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:46.320 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:05:46.320 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:05:46.320 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:05:47.705 13:31:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:47.705 13:31:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:05:47.705 13:31:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:47.705 
00:05:47.705 real 0m1.286s
00:05:47.705 user 0m1.189s
00:05:47.705 sys 0m0.109s
00:05:47.705 13:31:35 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:47.705 13:31:35 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:05:47.705 ************************************
00:05:47.705 END TEST accel_copy_crc32c_C2
00:05:47.705 ************************************
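The `IFS=:` / `read -r var val` pairs that fill the trace above are accel.sh splitting accel_perf's colon-delimited status output into variable/value pairs and dispatching on the variable name. A minimal sketch of that parsing idiom, with a hypothetical emit_status function standing in for the real accel_perf stream:

#!/usr/bin/env bash
# Hypothetical stand-in for the colon-delimited status stream accel_perf emits.
emit_status() {
  printf '%s\n' 'opc:copy_crc32c' 'module:software' 'queue_depth:32'
}

# Same idiom as the trace above: split each line on ':' into var and val.
while IFS=: read -r var val; do
  case "$var" in
    opc) accel_opc=$val ;;        # operation under test
    module) accel_module=$val ;;  # software vs. hardware engine
    *) ;;                         # ignore fields this sketch does not track
  esac
done < <(emit_status)

echo "parsed opc=$accel_opc module=$accel_module"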
00:05:47.705 13:31:35 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:47.705 13:31:35 accel -- accel/accel.sh@121 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:05:47.705 13:31:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:05:47.705 13:31:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:47.705 13:31:35 accel -- common/autotest_common.sh@10 -- # set +x
00:05:47.705 ************************************
00:05:47.705 START TEST accel_dualcast
00:05:47.705 ************************************
00:05:47.705 13:31:35 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y
00:05:47.705 13:31:35 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:05:47.705 13:31:35 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:05:47.705 13:31:35 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:05:47.705 13:31:35 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:05:47.705 13:31:35 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:05:47.705 13:31:35 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:47.705 13:31:35 accel.accel_dualcast -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]]
00:05:47.705 13:31:35 accel.accel_dualcast -- accel/accel.sh@45 -- # [[ -n '' ]]
00:05:47.705 13:31:35 accel.accel_dualcast -- accel/accel.sh@49 -- # local IFS=,
00:05:47.705 13:31:35 accel.accel_dualcast -- accel/accel.sh@50 -- # jq -r .
00:05:47.705 [2024-07-12 13:31:35.987743] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization...
00:05:47.705 [2024-07-12 13:31:35.987837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2432072 ]
00:05:47.706 EAL: No free 2048 kB hugepages reported on node 1
00:05:47.706 [2024-07-12 13:31:36.062989] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:47.706 [2024-07-12 13:31:36.136187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:47.706 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:05:47.706 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
00:05:47.706 13:31:36 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:05:47.706 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:47.706 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:05:47.706 13:31:36 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:05:47.706 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:05:47.706 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:05:47.706 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:05:47.706 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:05:47.706 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
00:05:49.091 13:31:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:49.091 13:31:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:05:49.091 13:31:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:49.091 
00:05:49.091 real 0m1.292s
00:05:49.091 user 0m1.183s
00:05:49.091 sys 0m0.120s
00:05:49.091 13:31:37 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:49.091 13:31:37 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:05:49.091 ************************************
00:05:49.091 END TEST accel_dualcast
00:05:49.091 ************************************
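Every block in this section is driven by run_test from common/autotest_common.sh, which prints the START/END banners, runs the test command, and propagates its exit code. A simplified, hypothetical sketch of that wrapper shape (named run_test_sketch here; the real helper also manages timing and xtrace state):

# Simplified, hypothetical wrapper shape; not the real autotest_common.sh code.
run_test_sketch() {
  local name=$1
  shift
  echo '************************************'
  echo "START TEST $name"
  echo '************************************'
  "$@"            # run the test command with its remaining arguments
  local rc=$?
  echo '************************************'
  echo "END TEST $name"
  echo '************************************'
  return "$rc"
}
# Usage mirroring the log: run_test_sketch accel_compare accel_test -t 1 -w compare -y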
00:05:49.091 13:31:37 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:49.091 13:31:37 accel -- accel/accel.sh@122 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:05:49.091 13:31:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:05:49.091 13:31:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:49.091 13:31:37 accel -- common/autotest_common.sh@10 -- # set +x
00:05:49.091 ************************************
00:05:49.091 START TEST accel_compare
00:05:49.091 ************************************
00:05:49.091 13:31:37 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y
00:05:49.091 13:31:37 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
00:05:49.091 13:31:37 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
00:05:49.091 13:31:37 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:05:49.091 13:31:37 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:05:49.091 13:31:37 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:05:49.091 13:31:37 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:49.092 13:31:37 accel.accel_compare -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]]
00:05:49.092 13:31:37 accel.accel_compare -- accel/accel.sh@45 -- # [[ -n '' ]]
00:05:49.092 13:31:37 accel.accel_compare -- accel/accel.sh@49 -- # local IFS=,
00:05:49.092 13:31:37 accel.accel_compare -- accel/accel.sh@50 -- # jq -r .
00:05:49.092 [2024-07-12 13:31:37.354920] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization...
00:05:49.092 [2024-07-12 13:31:37.355010] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2432421 ]
00:05:49.092 EAL: No free 2048 kB hugepages reported on node 1
00:05:49.092 [2024-07-12 13:31:37.420363] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:49.092 [2024-07-12 13:31:37.492061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:49.092 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:05:49.092 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val=compare
00:05:49.092 13:31:37 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:05:49.092 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:49.092 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val=software
00:05:49.092 13:31:37 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
00:05:49.092 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:05:49.092 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:05:49.092 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val=1
00:05:49.092 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds'
00:05:49.092 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes
00:05:50.034 13:31:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:50.034 13:31:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:05:50.034 13:31:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:50.035 
00:05:50.035 real 0m1.281s
00:05:50.035 user 0m1.173s
00:05:50.035 sys 0m0.118s
00:05:50.035 13:31:38 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:50.035 13:31:38 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:05:50.035 ************************************
00:05:50.035 END TEST accel_compare
00:05:50.035 ************************************
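The `accel_perf -c /dev/fd/62` invocations show the harness handing its jq-built JSON accel config to the binary over an anonymous file descriptor rather than a temp file; a /dev/fd path like that is characteristic of bash process substitution. A minimal sketch of the idiom, with cat standing in for accel_perf and a placeholder JSON document:

# Feed a config to a consumer over fd 62, the way the harness feeds accel_perf.
exec 62< <(printf '%s\n' '{"subsystems": []}')  # placeholder JSON, not the real config
cat /dev/fd/62   # the consumer (accel_perf in the log) reads its config here
exec 62<&-       # close the descriptor afterwards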
00:05:50.295 13:31:38 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:50.295 13:31:38 accel -- accel/accel.sh@123 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:05:50.295 13:31:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:05:50.295 13:31:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:50.295 13:31:38 accel -- common/autotest_common.sh@10 -- # set +x
00:05:50.295 ************************************
00:05:50.295 START TEST accel_xor
00:05:50.295 ************************************
00:05:50.296 13:31:38 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y
00:05:50.296 13:31:38 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:05:50.296 13:31:38 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:05:50.296 13:31:38 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:05:50.296 13:31:38 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:05:50.296 13:31:38 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:05:50.296 13:31:38 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:50.296 13:31:38 accel.accel_xor -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]]
00:05:50.296 13:31:38 accel.accel_xor -- accel/accel.sh@45 -- # [[ -n '' ]]
00:05:50.296 13:31:38 accel.accel_xor -- accel/accel.sh@49 -- # local IFS=,
00:05:50.296 13:31:38 accel.accel_xor -- accel/accel.sh@50 -- # jq -r .
00:05:50.296 [2024-07-12 13:31:38.710511] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization...
00:05:50.296 [2024-07-12 13:31:38.710592] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2432731 ]
00:05:50.296 EAL: No free 2048 kB hugepages reported on node 1
00:05:50.296 [2024-07-12 13:31:38.775470] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:50.296 [2024-07-12 13:31:38.841774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:50.296 13:31:38 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:05:50.296 13:31:38 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:05:50.296 13:31:38 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:05:50.296 13:31:38 accel.accel_xor -- accel/accel.sh@20 -- # val=2
00:05:50.296 13:31:38 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:50.555 13:31:38 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:05:50.555 13:31:38 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:05:50.555 13:31:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:05:50.556 13:31:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:05:50.556 13:31:38 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:05:50.556 13:31:38 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:05:50.556 13:31:38 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:05:51.497 13:31:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:51.497 13:31:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:05:51.497 13:31:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:51.497 
00:05:51.497 real 0m1.277s
00:05:51.497 user 0m1.182s
00:05:51.497 sys 0m0.107s
00:05:51.497 13:31:39 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:51.497 13:31:39 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:05:51.497 ************************************
00:05:51.497 END TEST accel_xor
00:05:51.497 ************************************
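The run below repeats the xor test with `-x 3`, raising the number of source buffers from the two used above to three; the expected output is still the bytewise XOR across all sources. Plain bash arithmetic illustrating the three-way XOR on a single byte:

# XOR across three one-byte sources; the values are arbitrary example bytes.
a=0xA5 b=0x3C c=0x0F
printf 'xor(3 srcs) = 0x%02X\n' $(( a ^ b ^ c ))  # prints 0x96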
00:05:51.497 [2024-07-12 13:31:40.063676] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2432908 ] 00:05:51.758 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.758 [2024-07-12 13:31:40.131559] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.758 [2024-07-12 13:31:40.198818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.758 13:31:40 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.758 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.759 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:53.142 13:31:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.142 00:05:53.142 real 0m1.282s 00:05:53.142 user 0m1.184s 00:05:53.142 sys 0m0.109s 00:05:53.142 13:31:41 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.142 13:31:41 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:53.142 ************************************ 00:05:53.142 END TEST accel_xor 00:05:53.142 ************************************ 00:05:53.142 13:31:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.142 13:31:41 accel -- accel/accel.sh@125 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:53.142 13:31:41 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:53.142 13:31:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.142 13:31:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.142 ************************************ 00:05:53.142 START TEST accel_dif_verify 00:05:53.142 ************************************ 00:05:53.142 13:31:41 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@49 -- # local IFS=, 00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@50 -- # jq -r . 00:05:53.142 [2024-07-12 13:31:41.418092] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
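The dif_verify block below carries two extra sizes beyond the usual 4096-byte transfer, '512 bytes' and '8 bytes', which matches the classic T10 DIF layout of an 8-byte integrity field guarding each 512-byte block. Assuming that layout, a quick check of how many protection tuples a 4 KiB payload carries:

# Assuming T10 DIF: one 8-byte integrity field per 512-byte data block.
payload=4096 block=512 dif=8
blocks=$(( payload / block ))
printf 'blocks=%d dif_bytes=%d\n' "$blocks" $(( blocks * dif ))  # blocks=8 dif_bytes=64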
00:05:53.142 13:31:41 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:53.142 13:31:41 accel -- accel/accel.sh@125 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:05:53.142 13:31:41 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:05:53.142 13:31:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:53.142 13:31:41 accel -- common/autotest_common.sh@10 -- # set +x
00:05:53.142 ************************************
00:05:53.142 START TEST accel_dif_verify
00:05:53.142 ************************************
00:05:53.142 13:31:41 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify
00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc
00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module
00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]]
00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@45 -- # [[ -n '' ]]
00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@49 -- # local IFS=,
00:05:53.142 13:31:41 accel.accel_dif_verify -- accel/accel.sh@50 -- # jq -r .
00:05:53.142 [2024-07-12 13:31:41.418092] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization...
00:05:53.142 [2024-07-12 13:31:41.418201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433160 ]
00:05:53.142 EAL: No free 2048 kB hugepages reported on node 1
00:05:53.143 [2024-07-12 13:31:41.485156] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:53.143 [2024-07-12 13:31:41.557707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:53.143 13:31:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1
00:05:53.143 13:31:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify
00:05:53.143 13:31:41 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:05:53.143 13:31:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:53.143 13:31:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:53.143 13:31:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes'
00:05:53.143 13:31:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes'
00:05:53.143 13:31:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software
00:05:53.143 13:31:41 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software
00:05:53.143 13:31:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:05:53.143 13:31:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:05:53.143 13:31:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1
00:05:53.143 13:31:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds'
00:05:53.143 13:31:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No
00:05:54.527 13:31:42 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:54.527 13:31:42 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:05:54.527 13:31:42 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:54.528 
00:05:54.528 real 0m1.285s
00:05:54.528 user 0m1.187s
00:05:54.528 sys 0m0.110s
00:05:54.528 13:31:42 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:54.528 13:31:42 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:05:54.528 ************************************
00:05:54.528 END TEST accel_dif_verify
00:05:54.528 ************************************
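Where dif_verify checks existing integrity fields, the dif_generate run starting below produces them; at its core is a per-block guard checksum. An illustrative-only loop shape in shell, using cksum's CRC-32 purely as a stand-in for the CRC-16/T10-DIF guard the real format computes:

# Illustrative only: one checksum per 512-byte block of a payload file.
# Real DIF guard tags use CRC-16/T10-DIF, not cksum's CRC-32.
gen_guards() {
  local file=$1 block=512 off=0 size
  size=$(wc -c < "$file")
  while (( off < size )); do
    dd if="$file" bs=1 skip="$off" count="$block" 2>/dev/null | cksum
    (( off += block ))
  done
}
# gen_guards payload.bin   # hypothetical input file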
13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@49 -- # local IFS=, 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@50 -- # jq -r . 00:05:54.528 [2024-07-12 13:31:42.775845] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:05:54.528 [2024-07-12 13:31:42.775942] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433509 ] 00:05:54.528 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.528 [2024-07-12 13:31:42.851458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.528 [2024-07-12 13:31:42.925017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.528 13:31:42 
accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.528 13:31:42 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.528 13:31:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@27 -- 
# [[ -n dif_generate ]] 00:05:55.470 13:31:44 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.470 00:05:55.470 real 0m1.293s 00:05:55.470 user 0m1.186s 00:05:55.470 sys 0m0.118s 00:05:55.470 13:31:44 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.470 13:31:44 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:55.470 ************************************ 00:05:55.470 END TEST accel_dif_generate 00:05:55.470 ************************************ 00:05:55.730 13:31:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.730 13:31:44 accel -- accel/accel.sh@127 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:55.730 13:31:44 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:55.731 13:31:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.731 13:31:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.731 ************************************ 00:05:55.731 START TEST accel_dif_generate_copy 00:05:55.731 ************************************ 00:05:55.731 13:31:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:55.731 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:55.731 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:55.731 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.731 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.731 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:55.731 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:55.731 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:55.731 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.731 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.731 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:55.731 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:55.731 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:55.731 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:55.731 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@49 -- # local IFS=, 00:05:55.731 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@50 -- # jq -r . 00:05:55.731 [2024-07-12 13:31:44.144239] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
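For readers following the trace: the lines above show accel.sh's build_accel_config leaving its accel_json_cfg array empty, so the software module handles the workload, and the "-c /dev/fd/62" argument is simply the file descriptor bash assigns to a process substitution that feeds that JSON config to accel_perf. A minimal stand-alone sketch of the equivalent invocation; the SPDK variable is shorthand introduced here, and the empty-config idiom is an assumption that mirrors the trace, not the literal accel.sh source:

  # Empty config array => no driver JSON, so accel falls back to the software
  # module, matching the "accel_module=software" assignment traced above.
  accel_json_cfg=()
  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  # -t 1: run the workload for 1 second; -w: the workload name under test
  "$SPDK/build/examples/accel_perf" \
      -c <(printf '%s\n' "${accel_json_cfg[@]}") \
      -t 1 -w dif_generate_copy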
00:05:55.731 [2024-07-12 13:31:44.144331] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433864 ] 00:05:55.731 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.731 [2024-07-12 13:31:44.210598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.731 [2024-07-12 13:31:44.282821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.731 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.995 13:31:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.023 00:05:57.023 real 0m1.284s 00:05:57.023 user 0m1.186s 00:05:57.023 sys 0m0.110s 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.023 13:31:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:57.023 ************************************ 00:05:57.023 END TEST accel_dif_generate_copy 00:05:57.023 ************************************ 00:05:57.023 13:31:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:57.023 13:31:45 accel -- accel/accel.sh@129 -- # [[ y == y ]] 00:05:57.023 13:31:45 accel -- accel/accel.sh@130 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:57.023 13:31:45 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:57.023 13:31:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.023 13:31:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.023 ************************************ 00:05:57.023 START TEST accel_comp 00:05:57.024 ************************************ 00:05:57.024 13:31:45 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:57.024 13:31:45 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:05:57.024 13:31:45 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:57.024 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.024 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.024 13:31:45 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:57.024 13:31:45 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:57.024 13:31:45 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:57.024 13:31:45 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.024 13:31:45 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.024 13:31:45 accel.accel_comp -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:57.024 13:31:45 accel.accel_comp -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:57.024 13:31:45 accel.accel_comp -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:57.024 13:31:45 accel.accel_comp -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:57.024 13:31:45 accel.accel_comp -- accel/accel.sh@49 -- # local IFS=, 00:05:57.024 13:31:45 accel.accel_comp -- accel/accel.sh@50 -- # jq -r . 00:05:57.024 [2024-07-12 13:31:45.505158] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:05:57.024 [2024-07-12 13:31:45.505302] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2434190 ] 00:05:57.024 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.024 [2024-07-12 13:31:45.572549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.285 [2024-07-12 13:31:45.641479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.285 13:31:45 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.285 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.286 13:31:45 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.286 13:31:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.286 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.286 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.286 13:31:45 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:57.286 13:31:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.286 13:31:45 accel.accel_comp -- 
accel/accel.sh@19 -- # IFS=: 00:05:57.286 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.286 13:31:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:57.286 13:31:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.286 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.286 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.286 13:31:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:57.286 13:31:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.286 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.286 13:31:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:58.226 13:31:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.226 00:05:58.226 real 0m1.285s 00:05:58.226 user 0m1.191s 00:05:58.226 sys 0m0.108s 00:05:58.226 13:31:46 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.226 13:31:46 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:58.226 ************************************ 00:05:58.226 END TEST accel_comp 00:05:58.226 ************************************ 00:05:58.226 13:31:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.226 13:31:46 accel -- accel/accel.sh@131 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:58.226 13:31:46 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:58.226 13:31:46 
accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.226 13:31:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.488 ************************************ 00:05:58.488 START TEST accel_decomp 00:05:58.488 ************************************ 00:05:58.488 13:31:46 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:58.488 13:31:46 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:58.488 13:31:46 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:58.488 13:31:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.488 13:31:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.488 13:31:46 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:58.488 13:31:46 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:58.488 13:31:46 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:58.488 13:31:46 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.488 13:31:46 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.488 13:31:46 accel.accel_decomp -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:58.488 13:31:46 accel.accel_decomp -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:58.488 13:31:46 accel.accel_decomp -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:58.488 13:31:46 accel.accel_decomp -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:58.488 13:31:46 accel.accel_decomp -- accel/accel.sh@49 -- # local IFS=, 00:05:58.488 13:31:46 accel.accel_decomp -- accel/accel.sh@50 -- # jq -r . 00:05:58.488 [2024-07-12 13:31:46.863905] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
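The decompress run traced here extends the accel_perf command line with "-l .../spdk/test/accel/bib", which supplies the pre-built bib file as input data, and "-y", which appears to turn on verification of the output (the flag's meaning is inferred from its use in these tests, not confirmed from the binary's help text). A sketch of the same call, with SPDK again a shorthand variable and the empty process substitution standing in for the traced /dev/fd/62 config:

  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  # 1-second decompress run against the bib test file; -y assumed to verify output
  "$SPDK/build/examples/accel_perf" -c <(:) -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y

The accel_decomp_full test that follows passes the same flags plus "-o 0"; its trace then reads a '111250 bytes' value in place of '4096 bytes', consistent with the transfer size being widened to the whole bib file.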
00:05:58.488 [2024-07-12 13:31:46.864003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2434366 ] 00:05:58.488 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.488 [2024-07-12 13:31:46.928693] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.488 [2024-07-12 13:31:46.998259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.488 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.489 13:31:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.971 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:59.971 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.971 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.971 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.971 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:59.971 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.971 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.971 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.971 13:31:48 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:59.971 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.971 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.971 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.971 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:59.972 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.972 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.972 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:59.972 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.972 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.972 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:59.972 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.972 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.972 13:31:48 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.972 13:31:48 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:59.972 13:31:48 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.972 00:05:59.972 real 0m1.281s 00:05:59.972 user 0m1.187s 00:05:59.972 sys 0m0.106s 00:05:59.972 13:31:48 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.972 13:31:48 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:59.972 ************************************ 00:05:59.972 END TEST accel_decomp 00:05:59.972 ************************************ 00:05:59.972 13:31:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.972 13:31:48 accel -- accel/accel.sh@132 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:59.972 13:31:48 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:59.972 13:31:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.972 13:31:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.972 ************************************ 00:05:59.972 START TEST accel_decomp_full 00:05:59.972 ************************************ 00:05:59.972 13:31:48 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:59.972 13:31:48 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@49 -- # local IFS=, 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@50 -- # jq -r . 00:05:59.972 [2024-07-12 13:31:48.221790] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:05:59.972 [2024-07-12 13:31:48.221879] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2434605 ] 00:05:59.972 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.972 [2024-07-12 13:31:48.290906] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.972 [2024-07-12 13:31:48.363057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:59.972 13:31:48 
accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.972 13:31:48 accel.accel_decomp_full -- 
accel/accel.sh@20 -- # val= 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.972 13:31:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:00.914 13:31:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:00.914 13:31:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:00.914 13:31:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:00.914 13:31:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:00.914 13:31:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:00.914 13:31:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:00.914 13:31:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:00.914 13:31:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:00.914 13:31:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:00.914 13:31:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:00.914 13:31:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:00.914 13:31:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:00.914 13:31:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:01.175 13:31:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.175 13:31:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.175 13:31:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.175 13:31:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:01.175 13:31:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.175 13:31:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.175 13:31:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.175 13:31:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:01.175 13:31:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.175 13:31:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.175 13:31:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.175 13:31:49 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.175 13:31:49 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:01.175 13:31:49 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.175 00:06:01.175 real 0m1.298s 00:06:01.175 user 0m1.194s 00:06:01.175 sys 0m0.117s 00:06:01.175 13:31:49 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.175 13:31:49 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:01.175 ************************************ 00:06:01.175 END TEST accel_decomp_full 00:06:01.175 ************************************ 00:06:01.175 13:31:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.175 13:31:49 accel -- accel/accel.sh@133 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l 
00:06:01.175 13:31:49 accel -- accel/accel.sh@133 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:06:01.175 13:31:49 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:06:01.175 13:31:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:01.175 13:31:49 accel -- common/autotest_common.sh@10 -- # set +x
00:06:01.175 ************************************
00:06:01.175 START TEST accel_decomp_mcore
00:06:01.175 ************************************
00:06:01.175 13:31:49 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:06:01.175 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc
00:06:01.175 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module
00:06:01.175 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:06:01.175 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:06:01.175 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config
00:06:01.175 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:01.175 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:01.175 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]]
00:06:01.175 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]]
00:06:01.175 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]]
00:06:01.175 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@45 -- # [[ -n '' ]]
00:06:01.175 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@49 -- # local IFS=,
00:06:01.175 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@50 -- # jq -r .
00:06:01.175 [2024-07-12 13:31:49.597300] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization...
00:06:01.175 [2024-07-12 13:31:49.597390] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2434954 ]
00:06:01.175 EAL: No free 2048 kB hugepages reported on node 1
00:06:01.176 [2024-07-12 13:31:49.665267] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:01.176 [2024-07-12 13:31:49.739825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:01.176 [2024-07-12 13:31:49.739942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:01.176 [2024-07-12 13:31:49.740099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:01.176 [2024-07-12 13:31:49.740100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:01.437 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf
00:06:01.437 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:06:01.437 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:01.437 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
00:06:01.437 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:06:01.437 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:06:01.437 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:06:01.437 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:06:01.437 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:06:01.437 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:06:01.437 13:31:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
00:06:02.379 13:31:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:02.379 13:31:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:02.379 13:31:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:02.379 real	0m1.299s
00:06:02.379 user	0m4.426s
00:06:02.379 sys	0m0.114s
00:06:02.379 13:31:50 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:02.379 13:31:50 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:06:02.379 ************************************
00:06:02.379 END TEST accel_decomp_mcore
00:06:02.379 ************************************
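Reading the mcore result: -m 0xf is a hexadecimal core mask, and 0xf = binary 1111 selects cores 0-3, which matches both the "Total cores available: 4" notice and the four "Reactor started on core N" lines above. The timing reflects it too: user 0m4.426s against real 0m1.299s wall time is a ratio of about 3.4, meaning all four reactors were busy for most of the 1-second measurement window. A pure-bash popcount of the mask, shown only as a worked check and not part of the test scripts:

    # Count the cores selected by a mask -- a worked check, not part of accel.sh.
    mask=0xf
    count=0
    for ((i = 0; i < 32; i++)); do
      (( (mask >> i) & 1 )) && ((count++))
    done
    echo "$count cores"   # prints: 4 cores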
00:06:02.379 13:31:50 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:02.379 13:31:50 accel -- accel/accel.sh@134 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:02.379 13:31:50 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:06:02.379 13:31:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:02.379 13:31:50 accel -- common/autotest_common.sh@10 -- # set +x
00:06:02.379 ************************************
00:06:02.379 START TEST accel_decomp_full_mcore
00:06:02.379 ************************************
00:06:02.379 13:31:50 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:02.379 13:31:50 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:02.379 13:31:50 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
00:06:02.640 [2024-07-12 13:31:50.970559] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization...
00:06:02.640 [2024-07-12 13:31:50.970648] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2435312 ]
00:06:02.640 EAL: No free 2048 kB hugepages reported on node 1
00:06:02.641 [2024-07-12 13:31:51.037735] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:02.641 [2024-07-12 13:31:51.112725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:02.641 [2024-07-12 13:31:51.112844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:02.641 [2024-07-12 13:31:51.113002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:02.641 [2024-07-12 13:31:51.113002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:02.641 13:31:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf
00:06:02.641 13:31:51 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:06:02.641 13:31:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes'
00:06:02.641 13:31:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software
00:06:02.641 13:31:51 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software
00:06:02.641 13:31:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:06:02.641 13:31:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:06:02.641 13:31:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:06:02.641 13:31:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1
00:06:02.641 13:31:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:06:02.641 13:31:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes
00:06:04.025 13:31:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:04.025 13:31:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:04.025 13:31:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:04.025 real	0m1.310s
00:06:04.025 user	0m4.469s
00:06:04.025 sys	0m0.119s
00:06:04.025 13:31:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:04.025 13:31:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:06:04.025 ************************************
00:06:04.025 END TEST accel_decomp_full_mcore
00:06:04.025 ************************************
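Every block in this log is wrapped by the run_test helper from autotest_common.sh: it prints the START/END banners, runs the command under time (the real/user/sys triples), and performs the argument-count guard that appears in the trace as '[' 11 -le 1 ']' or '[' 13 -le 1 ']' (the count includes the test name plus the full command line). A rough sketch of the wrapper's behavior, inferred from the trace; the real helper differs in detail:

    # Hedged reconstruction of run_test -- illustrative, not the autotest_common.sh source.
    run_test() {
      [ "$#" -le 1 ] && { echo "usage: run_test <name> <command...>" >&2; return 1; }
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                      # produces the real/user/sys lines in the log
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
    }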
00:06:04.025 13:31:52 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:04.025 13:31:52 accel -- accel/accel.sh@135 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:04.025 13:31:52 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:06:04.025 13:31:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:04.025 13:31:52 accel -- common/autotest_common.sh@10 -- # set +x
00:06:04.025 ************************************
00:06:04.025 START TEST accel_decomp_mthread
00:06:04.025 ************************************
00:06:04.025 13:31:52 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:04.025 13:31:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:04.025 13:31:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
00:06:04.025 [2024-07-12 13:31:52.356736] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization...
00:06:04.025 [2024-07-12 13:31:52.356833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2435665 ]
00:06:04.025 EAL: No free 2048 kB hugepages reported on node 1
00:06:04.025 [2024-07-12 13:31:52.422564] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:04.025 [2024-07-12 13:31:52.488412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.025 13:31:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1
00:06:04.025 13:31:52 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:06:04.025 13:31:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:04.025 13:31:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software
00:06:04.025 13:31:52 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software
00:06:04.025 13:31:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:06:04.025 13:31:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:06:04.025 13:31:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:06:04.025 13:31:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2
00:06:04.025 13:31:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:06:04.025 13:31:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes
00:06:05.407 13:31:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:05.407 13:31:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:05.407 13:31:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:05.407 real	0m1.282s
00:06:05.407 user	0m1.183s
00:06:05.407 sys	0m0.112s
00:06:05.407 13:31:53 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:05.407 13:31:53 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
00:06:05.407 ************************************
00:06:05.407 END TEST accel_decomp_mthread
00:06:05.407 ************************************
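-T 2 is the thread-count knob for this run: it shows up in the trace as the val=2 read where the single-threaded tests read val=1. On the single reactor core the EAL was given here (-c 0x1), the two-thread run still completes in real 0m1.282s with user 0m1.183s, so the point of the test is correctness of multithreaded decompress rather than speedup. A standalone invocation of the same workload, judging from the command lines in this log (in the harness, -c /dev/fd/62 feeds a JSON accel config over a pipe; standalone you would pass a config file path or omit -c):

    # Illustrative standalone run of the accel_decomp_mthread workload
    # (paths relative to an SPDK build tree).
    ./build/examples/accel_perf -t 1 -w decompress \
        -l test/accel/bib -y -T 2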
00:06:05.407 13:31:53 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:05.407 13:31:53 accel -- accel/accel.sh@136 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:05.407 13:31:53 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:06:05.407 13:31:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:05.407 13:31:53 accel -- common/autotest_common.sh@10 -- # set +x
00:06:05.407 ************************************
00:06:05.407 START TEST accel_decomp_full_mthread
00:06:05.407 ************************************
00:06:05.407 13:31:53 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:05.407 13:31:53 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:05.407 13:31:53 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config
00:06:05.407 [2024-07-12 13:31:53.713184] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization...
00:06:05.408 [2024-07-12 13:31:53.713291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2435851 ]
00:06:05.408 EAL: No free 2048 kB hugepages reported on node 1
00:06:05.408 [2024-07-12 13:31:53.779206] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:05.408 [2024-07-12 13:31:53.849581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:05.408 13:31:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1
00:06:05.408 13:31:53 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:06:05.408 13:31:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes'
00:06:05.408 13:31:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software
00:06:05.408 13:31:53 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software
00:06:05.408 13:31:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:06:05.408 13:31:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:06:05.408 13:31:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:06:05.408 13:31:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2
00:06:05.408 13:31:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:06:05.408 13:31:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes
00:06:06.794 13:31:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:06.794 13:31:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:06.794 13:31:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:06.794 real	0m1.316s
00:06:06.794 user	0m1.213s
00:06:06.794 sys	0m0.115s
00:06:06.794 13:31:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:06.794 13:31:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x
00:06:06.794 ************************************
00:06:06.794 END TEST accel_decomp_full_mthread
00:06:06.794 ************************************
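The _full variants add -o 0 to the same command line; in the trace that turns the transfer-size read from val='4096 bytes' into val='111250 bytes', which suggests -o 0 means "use the whole bib input file as a single transfer" (111250 bytes appears to be that file's size on this tree). The cost is modest here: the full two-thread run takes real 0m1.316s / user 0m1.213s versus real 0m1.282s / user 0m1.183s for the 4096-byte chunked run. A one-line check of the size assumption, not part of the harness:

    # GNU stat; the 111250 figure is taken from the trace above, not re-measured here.
    stat -c %s test/accel/bib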
TEST accel_decomp_full_mthread 00:06:06.794 ************************************ 00:06:06.794 13:31:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.794 13:31:55 accel -- accel/accel.sh@138 -- # [[ n == y ]] 00:06:06.794 13:31:55 accel -- accel/accel.sh@150 -- # [[ 0 == 1 ]] 00:06:06.794 13:31:55 accel -- accel/accel.sh@177 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:06.794 13:31:55 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:06.794 13:31:55 accel -- accel/accel.sh@177 -- # build_accel_config 00:06:06.794 13:31:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.794 13:31:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.794 13:31:55 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.794 13:31:55 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.794 13:31:55 accel -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:06.794 13:31:55 accel -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:06.794 13:31:55 accel -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:06.794 13:31:55 accel -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:06.794 13:31:55 accel -- accel/accel.sh@49 -- # local IFS=, 00:06:06.794 13:31:55 accel -- accel/accel.sh@50 -- # jq -r . 00:06:06.794 ************************************ 00:06:06.795 START TEST accel_dif_functional_tests 00:06:06.795 ************************************ 00:06:06.795 13:31:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:06.795 [2024-07-12 13:31:55.105997] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:06:06.795 [2024-07-12 13:31:55.106092] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2436070 ] 00:06:06.795 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.795 [2024-07-12 13:31:55.170998] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.795 [2024-07-12 13:31:55.243047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.795 [2024-07-12 13:31:55.243162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.795 [2024-07-12 13:31:55.243164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.795 00:06:06.795 00:06:06.795 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.795 http://cunit.sourceforge.net/ 00:06:06.795 00:06:06.795 00:06:06.795 Suite: accel_dif 00:06:06.795 Test: verify: DIF generated, GUARD check ...passed 00:06:06.795 Test: verify: DIF generated, APPTAG check ...passed 00:06:06.795 Test: verify: DIF generated, REFTAG check ...passed 00:06:06.795 Test: verify: DIF not generated, GUARD check ...[2024-07-12 13:31:55.297598] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:06.795 passed 00:06:06.795 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 13:31:55.297646] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:06.795 passed 00:06:06.795 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 13:31:55.297669] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 
00:06:06.795 passed 00:06:06.795 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:06.795 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 13:31:55.297715] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:06.795 passed 00:06:06.795 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:06.795 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:06.795 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:06.795 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 13:31:55.297809] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:06.795 passed 00:06:06.795 Test: verify copy: DIF generated, GUARD check ...passed 00:06:06.795 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:06.795 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:06.795 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 13:31:55.297923] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:06.795 passed 00:06:06.795 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 13:31:55.297947] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:06.795 passed 00:06:06.795 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 13:31:55.297969] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:06.795 passed 00:06:06.795 Test: generate copy: DIF generated, GUARD check ...passed 00:06:06.795 Test: generate copy: DIF generated, APPTAG check ...passed 00:06:06.795 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:06.795 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:06.795 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:06.795 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:06.795 Test: generate copy: iovecs-len validate ...[2024-07-12 13:31:55.298139] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:06:06.795 passed 00:06:06.795 Test: generate copy: buffer alignment validate ...passed 00:06:06.795 00:06:06.795 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.795 suites 1 1 n/a 0 0 00:06:06.795 tests 26 26 26 0 0 00:06:06.795 asserts 115 115 115 0 n/a 00:06:06.795 00:06:06.795 Elapsed time = 0.000 seconds 00:06:07.055 00:06:07.055 real 0m0.324s 00:06:07.055 user 0m0.460s 00:06:07.055 sys 0m0.127s 00:06:07.055 13:31:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.055 13:31:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:07.055 ************************************ 00:06:07.055 END TEST accel_dif_functional_tests 00:06:07.055 ************************************ 00:06:07.055 13:31:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.055 13:31:55 accel -- accel/accel.sh@178 -- # export PCI_ALLOWED= 00:06:07.055 13:31:55 accel -- accel/accel.sh@178 -- # PCI_ALLOWED= 00:06:07.055 00:06:07.055 real 0m29.857s 00:06:07.055 user 0m33.342s 00:06:07.055 sys 0m4.244s 00:06:07.055 13:31:55 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.055 13:31:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.055 ************************************ 00:06:07.055 END TEST accel 00:06:07.055 ************************************ 00:06:07.055 13:31:55 -- common/autotest_common.sh@1142 -- # return 0 00:06:07.055 13:31:55 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:07.055 13:31:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.055 13:31:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.055 13:31:55 -- common/autotest_common.sh@10 -- # set +x 00:06:07.055 ************************************ 00:06:07.055 START TEST accel_rpc 00:06:07.055 ************************************ 00:06:07.055 13:31:55 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:07.055 * Looking for test storage... 00:06:07.055 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:07.055 13:31:55 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:07.055 13:31:55 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2436372 00:06:07.055 13:31:55 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2436372 00:06:07.055 13:31:55 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:07.055 13:31:55 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2436372 ']' 00:06:07.055 13:31:55 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.055 13:31:55 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.055 13:31:55 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.055 13:31:55 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.055 13:31:55 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.315 [2024-07-12 13:31:55.645907] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
00:06:07.315 [2024-07-12 13:31:55.645976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2436372 ] 00:06:07.315 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.315 [2024-07-12 13:31:55.708100] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.315 [2024-07-12 13:31:55.774653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.885 13:31:56 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.885 13:31:56 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:07.885 13:31:56 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:07.885 13:31:56 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:07.885 13:31:56 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:07.885 13:31:56 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:07.885 13:31:56 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:07.885 13:31:56 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.885 13:31:56 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.885 13:31:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.885 ************************************ 00:06:07.885 START TEST accel_assign_opcode 00:06:07.885 ************************************ 00:06:07.885 13:31:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:07.885 13:31:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:07.885 13:31:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.885 13:31:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:07.885 [2024-07-12 13:31:56.460613] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:07.885 13:31:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.885 13:31:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:07.885 13:31:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.885 13:31:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:08.146 [2024-07-12 13:31:56.472630] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:08.146 13:31:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.146 13:31:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:08.146 13:31:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.146 13:31:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:08.146 13:31:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.146 13:31:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:08.146 13:31:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:08.146 13:31:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:06:08.146 13:31:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:08.146 13:31:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:08.146 13:31:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.146 software 00:06:08.146 00:06:08.146 real 0m0.213s 00:06:08.146 user 0m0.049s 00:06:08.146 sys 0m0.011s 00:06:08.146 13:31:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.146 13:31:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:08.146 ************************************ 00:06:08.146 END TEST accel_assign_opcode 00:06:08.146 ************************************ 00:06:08.146 13:31:56 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:08.146 13:31:56 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2436372 00:06:08.146 13:31:56 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2436372 ']' 00:06:08.146 13:31:56 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2436372 00:06:08.146 13:31:56 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:08.146 13:31:56 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.146 13:31:56 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2436372 00:06:08.406 13:31:56 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:08.406 13:31:56 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:08.406 13:31:56 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2436372' 00:06:08.406 killing process with pid 2436372 00:06:08.406 13:31:56 accel_rpc -- common/autotest_common.sh@967 -- # kill 2436372 00:06:08.406 13:31:56 accel_rpc -- common/autotest_common.sh@972 -- # wait 2436372 00:06:08.406 00:06:08.406 real 0m1.440s 00:06:08.406 user 0m1.532s 00:06:08.406 sys 0m0.370s 00:06:08.406 13:31:56 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.406 13:31:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.406 ************************************ 00:06:08.406 END TEST accel_rpc 00:06:08.406 ************************************ 00:06:08.668 13:31:56 -- common/autotest_common.sh@1142 -- # return 0 00:06:08.668 13:31:57 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:08.668 13:31:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.668 13:31:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.668 13:31:57 -- common/autotest_common.sh@10 -- # set +x 00:06:08.668 ************************************ 00:06:08.668 START TEST app_cmdline 00:06:08.668 ************************************ 00:06:08.668 13:31:57 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:08.668 * Looking for test storage... 
00:06:08.668 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:08.668 13:31:57 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:08.668 13:31:57 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2436674 00:06:08.668 13:31:57 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2436674 00:06:08.668 13:31:57 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:08.668 13:31:57 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2436674 ']' 00:06:08.668 13:31:57 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.668 13:31:57 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.668 13:31:57 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.669 13:31:57 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.669 13:31:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:08.669 [2024-07-12 13:31:57.162478] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:06:08.669 [2024-07-12 13:31:57.162557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2436674 ] 00:06:08.669 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.669 [2024-07-12 13:31:57.230379] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.930 [2024-07-12 13:31:57.307135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.501 13:31:57 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.501 13:31:57 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:09.501 13:31:57 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:09.761 { 00:06:09.761 "version": "SPDK v24.09-pre git sha1 a49cd26ae", 00:06:09.761 "fields": { 00:06:09.761 "major": 24, 00:06:09.761 "minor": 9, 00:06:09.761 "patch": 0, 00:06:09.761 "suffix": "-pre", 00:06:09.761 "commit": "a49cd26ae" 00:06:09.761 } 00:06:09.761 } 00:06:09.761 13:31:58 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:09.761 13:31:58 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:09.761 13:31:58 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:09.761 13:31:58 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:09.761 13:31:58 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:09.761 13:31:58 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:09.761 13:31:58 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.761 13:31:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:09.761 13:31:58 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:09.761 13:31:58 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.761 13:31:58 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:09.761 13:31:58 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:09.761 13:31:58 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:09.761 13:31:58 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:09.761 13:31:58 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:09.761 13:31:58 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:09.761 13:31:58 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.761 13:31:58 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:09.761 13:31:58 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.762 13:31:58 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:09.762 13:31:58 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.762 13:31:58 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:09.762 13:31:58 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:09.762 13:31:58 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:09.762 request: 00:06:09.762 { 00:06:09.762 "method": "env_dpdk_get_mem_stats", 00:06:09.762 "req_id": 1 00:06:09.762 } 00:06:09.762 Got JSON-RPC error response 00:06:09.762 response: 00:06:09.762 { 00:06:09.762 "code": -32601, 00:06:09.762 "message": "Method not found" 00:06:09.762 } 00:06:09.762 13:31:58 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:09.762 13:31:58 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:09.762 13:31:58 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:09.762 13:31:58 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:09.762 13:31:58 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2436674 00:06:09.762 13:31:58 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2436674 ']' 00:06:09.762 13:31:58 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2436674 00:06:09.762 13:31:58 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:09.762 13:31:58 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.762 13:31:58 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2436674 00:06:10.022 13:31:58 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.022 13:31:58 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.022 13:31:58 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2436674' 00:06:10.022 killing process with pid 2436674 00:06:10.022 13:31:58 app_cmdline -- common/autotest_common.sh@967 -- # kill 2436674 00:06:10.022 13:31:58 app_cmdline -- common/autotest_common.sh@972 -- # wait 2436674 00:06:10.022 00:06:10.022 real 0m1.550s 00:06:10.022 user 0m1.842s 00:06:10.022 sys 0m0.424s 00:06:10.022 13:31:58 app_cmdline -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:06:10.022 13:31:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:10.022 ************************************ 00:06:10.022 END TEST app_cmdline 00:06:10.022 ************************************ 00:06:10.283 13:31:58 -- common/autotest_common.sh@1142 -- # return 0 00:06:10.283 13:31:58 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:10.283 13:31:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.283 13:31:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.283 13:31:58 -- common/autotest_common.sh@10 -- # set +x 00:06:10.283 ************************************ 00:06:10.283 START TEST version 00:06:10.283 ************************************ 00:06:10.283 13:31:58 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:10.283 * Looking for test storage... 00:06:10.283 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:10.283 13:31:58 version -- app/version.sh@17 -- # get_header_version major 00:06:10.283 13:31:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:10.283 13:31:58 version -- app/version.sh@14 -- # cut -f2 00:06:10.283 13:31:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.283 13:31:58 version -- app/version.sh@17 -- # major=24 00:06:10.283 13:31:58 version -- app/version.sh@18 -- # get_header_version minor 00:06:10.283 13:31:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:10.283 13:31:58 version -- app/version.sh@14 -- # cut -f2 00:06:10.283 13:31:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.283 13:31:58 version -- app/version.sh@18 -- # minor=9 00:06:10.283 13:31:58 version -- app/version.sh@19 -- # get_header_version patch 00:06:10.283 13:31:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:10.283 13:31:58 version -- app/version.sh@14 -- # cut -f2 00:06:10.283 13:31:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.283 13:31:58 version -- app/version.sh@19 -- # patch=0 00:06:10.283 13:31:58 version -- app/version.sh@20 -- # get_header_version suffix 00:06:10.283 13:31:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:10.283 13:31:58 version -- app/version.sh@14 -- # cut -f2 00:06:10.283 13:31:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.283 13:31:58 version -- app/version.sh@20 -- # suffix=-pre 00:06:10.283 13:31:58 version -- app/version.sh@22 -- # version=24.9 00:06:10.283 13:31:58 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:10.283 13:31:58 version -- app/version.sh@28 -- # version=24.9rc0 00:06:10.283 13:31:58 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:10.283 13:31:58 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:06:10.283 13:31:58 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:10.283 13:31:58 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:10.283 00:06:10.283 real 0m0.179s 00:06:10.283 user 0m0.087s 00:06:10.283 sys 0m0.131s 00:06:10.283 13:31:58 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.283 13:31:58 version -- common/autotest_common.sh@10 -- # set +x 00:06:10.283 ************************************ 00:06:10.283 END TEST version 00:06:10.283 ************************************ 00:06:10.544 13:31:58 -- common/autotest_common.sh@1142 -- # return 0 00:06:10.544 13:31:58 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:10.544 13:31:58 -- spdk/autotest.sh@198 -- # uname -s 00:06:10.544 13:31:58 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:10.544 13:31:58 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:10.544 13:31:58 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:10.544 13:31:58 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:10.544 13:31:58 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:10.544 13:31:58 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:10.544 13:31:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:10.544 13:31:58 -- common/autotest_common.sh@10 -- # set +x 00:06:10.544 13:31:58 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:10.544 13:31:58 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:10.544 13:31:58 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:06:10.544 13:31:58 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:06:10.544 13:31:58 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:06:10.544 13:31:58 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:06:10.544 13:31:58 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:06:10.544 13:31:58 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:06:10.544 13:31:58 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:06:10.544 13:31:58 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:06:10.544 13:31:58 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:06:10.544 13:31:58 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:06:10.544 13:31:58 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:06:10.544 13:31:58 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:06:10.544 13:31:58 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:06:10.544 13:31:58 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:06:10.544 13:31:58 -- spdk/autotest.sh@371 -- # [[ 1 -eq 1 ]] 00:06:10.544 13:31:58 -- spdk/autotest.sh@372 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:10.544 13:31:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.544 13:31:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.544 13:31:58 -- common/autotest_common.sh@10 -- # set +x 00:06:10.544 ************************************ 00:06:10.544 START TEST llvm_fuzz 00:06:10.544 ************************************ 00:06:10.544 13:31:58 llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:10.544 * Looking for test storage... 
00:06:10.544 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:10.544 13:31:59 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:10.544 13:31:59 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:10.544 13:31:59 llvm_fuzz -- common/autotest_common.sh@546 -- # fuzzers=() 00:06:10.544 13:31:59 llvm_fuzz -- common/autotest_common.sh@546 -- # local fuzzers 00:06:10.545 13:31:59 llvm_fuzz -- common/autotest_common.sh@548 -- # [[ -n '' ]] 00:06:10.545 13:31:59 llvm_fuzz -- common/autotest_common.sh@551 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:10.545 13:31:59 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:10.545 13:31:59 llvm_fuzz -- common/autotest_common.sh@555 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:10.545 13:31:59 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:10.545 13:31:59 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:06:10.545 13:31:59 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:06:10.545 13:31:59 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:10.545 13:31:59 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:10.545 13:31:59 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:10.545 13:31:59 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:10.545 13:31:59 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:10.545 13:31:59 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:10.545 13:31:59 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:10.545 13:31:59 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.545 13:31:59 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.545 13:31:59 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:10.545 ************************************ 00:06:10.545 START TEST nvmf_llvm_fuzz 00:06:10.545 ************************************ 00:06:10.545 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:10.809 * Looking for test storage... 
00:06:10.809 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
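The CONFIG_* assignments streaming past here are test/common/build_config.sh being sourced; that file is generated at build time and records what ./configure was given. As a hedged sketch (the flag names are assumed from SPDK's configure script, since the configure step itself is not part of this log), a build matching this dump might have been produced by:

    # hedged sketch: a configure invocation that would plausibly yield the
    # CONFIG_FUZZER=y / CONFIG_FUZZER_LIB / CONFIG_UBSAN=y / CONFIG_DEBUG=y
    # values in this dump (flag names assumed, path taken from CONFIG_FUZZER_LIB)
    ./configure --enable-debug --enable-ubsan \
        --with-fuzzer=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a
    make -j"$(nproc)"

CONFIG_FUZZER_LIB above carries that same libclang_rt.fuzzer_no_main archive path verbatim; it is the library the LLVM fuzzer targets link against.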
00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:10.809 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:10.809 #define SPDK_CONFIG_H 00:06:10.809 #define SPDK_CONFIG_APPS 1 00:06:10.809 #define SPDK_CONFIG_ARCH native 00:06:10.809 #undef SPDK_CONFIG_ASAN 00:06:10.809 #undef SPDK_CONFIG_AVAHI 00:06:10.809 #undef SPDK_CONFIG_CET 00:06:10.809 #define SPDK_CONFIG_COVERAGE 1 00:06:10.809 #define SPDK_CONFIG_CROSS_PREFIX 00:06:10.809 #undef SPDK_CONFIG_CRYPTO 00:06:10.809 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:10.809 #undef SPDK_CONFIG_CUSTOMOCF 00:06:10.810 #undef SPDK_CONFIG_DAOS 00:06:10.810 #define SPDK_CONFIG_DAOS_DIR 00:06:10.810 #define SPDK_CONFIG_DEBUG 1 00:06:10.810 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:10.810 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:10.810 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:10.810 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:10.810 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:10.810 #undef SPDK_CONFIG_DPDK_UADK 00:06:10.810 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:10.810 #define SPDK_CONFIG_EXAMPLES 1 00:06:10.810 #undef SPDK_CONFIG_FC 00:06:10.810 #define SPDK_CONFIG_FC_PATH 00:06:10.810 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:10.810 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:10.810 #undef SPDK_CONFIG_FUSE 00:06:10.810 #define SPDK_CONFIG_FUZZER 1 00:06:10.810 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:10.810 #undef SPDK_CONFIG_GOLANG 00:06:10.810 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:10.810 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:10.810 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:10.810 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:10.810 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:10.810 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:10.810 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:10.810 #define SPDK_CONFIG_IDXD 1 00:06:10.810 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:10.810 #undef SPDK_CONFIG_IPSEC_MB 00:06:10.810 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:10.810 #define SPDK_CONFIG_ISAL 1 00:06:10.810 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:06:10.810 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:10.810 #define SPDK_CONFIG_LIBDIR 00:06:10.810 #undef SPDK_CONFIG_LTO 00:06:10.810 #define SPDK_CONFIG_MAX_LCORES 128 00:06:10.810 #define SPDK_CONFIG_NVME_CUSE 1 00:06:10.810 #undef SPDK_CONFIG_OCF 00:06:10.810 #define SPDK_CONFIG_OCF_PATH 00:06:10.810 #define SPDK_CONFIG_OPENSSL_PATH 00:06:10.810 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:10.810 #define SPDK_CONFIG_PGO_DIR 00:06:10.810 #undef SPDK_CONFIG_PGO_USE 00:06:10.810 #define SPDK_CONFIG_PREFIX /usr/local 00:06:10.810 #undef SPDK_CONFIG_RAID5F 00:06:10.810 #undef SPDK_CONFIG_RBD 00:06:10.810 #define SPDK_CONFIG_RDMA 1 00:06:10.810 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:10.810 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:10.810 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:10.810 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:10.810 #undef SPDK_CONFIG_SHARED 00:06:10.810 #undef SPDK_CONFIG_SMA 00:06:10.810 #define SPDK_CONFIG_TESTS 1 00:06:10.810 #undef SPDK_CONFIG_TSAN 00:06:10.810 #define SPDK_CONFIG_UBLK 1 00:06:10.810 #define SPDK_CONFIG_UBSAN 1 00:06:10.810 #undef SPDK_CONFIG_UNIT_TESTS 00:06:10.810 #undef SPDK_CONFIG_URING 00:06:10.810 #define SPDK_CONFIG_URING_PATH 00:06:10.810 #undef SPDK_CONFIG_URING_ZNS 00:06:10.810 #undef SPDK_CONFIG_USDT 00:06:10.810 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:10.810 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:10.810 #define SPDK_CONFIG_VFIO_USER 1 00:06:10.810 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:10.810 #define SPDK_CONFIG_VHOST 1 00:06:10.810 #define SPDK_CONFIG_VIRTIO 1 00:06:10.810 #undef SPDK_CONFIG_VTUNE 00:06:10.810 #define SPDK_CONFIG_VTUNE_DIR 00:06:10.810 #define SPDK_CONFIG_WERROR 1 00:06:10.810 #define SPDK_CONFIG_WPDK_DIR 00:06:10.810 #undef SPDK_CONFIG_XNVME 00:06:10.810 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
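The backslash-heavy pattern that closes the config.h dump above ( == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ) is a plain bash substring test: inside [[ ]], the right-hand side of == is a glob, and under set -x bash prints quoted (literal) pattern characters with backslash escapes. A minimal, self-contained sketch of the same idiom (the string here is a stand-in for the real $(< include/spdk/config.h) contents):

    # hedged sketch of the escaped-glob substring test traced above;
    # the quoted middle matches literally, the bare * on each side is a wildcard
    cfg='#define SPDK_CONFIG_DEBUG 1'   # stand-in for the config.h contents
    if [[ $cfg == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build detected"
    fi

The same rendering shows up throughout this trace, e.g. [[ software == \s\o\f\t\w\a\r\e ]], where the script source almost certainly compares against a quoted "software".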
00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:10.810 
13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:10.810 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:10.811 13:31:59 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # : 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # cat 00:06:10.811 13:31:59 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:10.811 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:10.812 13:31:59 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 2437330 ]] 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 2437330 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # set_test_storage 2147483648 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.T8mWpt 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.T8mWpt/tests/nvmf /tmp/spdk.T8mWpt 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- 
# uses["$mount"]=0 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=956157952 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4328271872 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=121094852608 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370980352 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=8276127744 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=64680779776 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=25867657216 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874198528 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=6541312 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=179200 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=324608 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:10.812 13:31:59 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=64684015616 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=1474560 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=12937093120 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937097216 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:10.812 * Looking for test storage... 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=121094852608 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=10490720256 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:10.812 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:10.812 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:06:10.813 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1709 -- # set -o errtrace 00:06:10.813 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1710 -- # shopt -s extdebug 00:06:10.813 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:10.813 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1713 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:10.813 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1714 -- # true 00:06:10.813 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1716 -- # xtrace_fd 00:06:10.813 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:10.813 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:10.813 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:10.813 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:10.813 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:10.813 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:10.813 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:10.813 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:10.813 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:10.813 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:06:10.813 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:10.813 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 
-- # local timen=1 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:11.074 13:31:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:11.074 [2024-07-12 13:31:59.440212] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:06:11.074 [2024-07-12 13:31:59.440325] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2437386 ] 00:06:11.074 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.074 [2024-07-12 13:31:59.629257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.335 [2024-07-12 13:31:59.688616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.336 [2024-07-12 13:31:59.751114] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:11.336 [2024-07-12 13:31:59.767482] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:11.336 INFO: Running with entropic power schedule (0xFF, 100). 00:06:11.336 INFO: Seed: 2717619816 00:06:11.336 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:11.336 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:11.336 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:11.336 INFO: A corpus is not provided, starting from an empty corpus 00:06:11.336 #2 INITED exec/s: 0 rss: 65Mb 00:06:11.336 WARNING: no interesting inputs were found so far. 
Is the code instrumented for coverage? 00:06:11.336 This may also happen if the target rejected all inputs we tried so far 00:06:11.336 [2024-07-12 13:31:59.822512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:11.336 [2024-07-12 13:31:59.822540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.597 NEW_FUNC[1/694]: 0x483e80 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:11.597 NEW_FUNC[2/694]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:11.597 #8 NEW cov: 11825 ft: 11824 corp: 2/107b lim: 320 exec/s: 0 rss: 70Mb L: 106/106 MS: 1 InsertRepeatedBytes- 00:06:11.597 [2024-07-12 13:31:59.993716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.597 [2024-07-12 13:31:59.993795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.597 [2024-07-12 13:31:59.993888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:11.597 [2024-07-12 13:31:59.993916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.597 NEW_FUNC[1/2]: 0x138e890 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2047 00:06:11.597 NEW_FUNC[2/2]: 0x17bf890 in nvme_get_sgl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:159 00:06:11.597 #12 NEW cov: 12013 ft: 12708 corp: 3/267b lim: 320 exec/s: 0 rss: 72Mb L: 160/160 MS: 4 InsertRepeatedBytes-ChangeByte-CMP-CrossOver- DE: "\000'\032\360x\030\346\353"- 00:06:11.597 [2024-07-12 13:32:00.063552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.597 [2024-07-12 13:32:00.063581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.597 [2024-07-12 13:32:00.063621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:11.597 [2024-07-12 13:32:00.063632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.597 #13 NEW cov: 12019 ft: 12973 corp: 4/427b lim: 320 exec/s: 0 rss: 72Mb L: 160/160 MS: 1 CopyPart- 00:06:11.597 [2024-07-12 13:32:00.123716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.597 [2024-07-12 13:32:00.123742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.597 [2024-07-12 13:32:00.123783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:1af08a35 cdw10:00000000 cdw11:00000000 00:06:11.597 [2024-07-12 13:32:00.123794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.597 #19 NEW cov: 12104 ft: 13223 corp: 5/587b lim: 320 exec/s: 0 rss: 72Mb L: 160/160 MS: 1 CMP- DE: "\337J5\212\360\032'\000"- 00:06:11.597 [2024-07-12 13:32:00.173837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.597 [2024-07-12 13:32:00.173862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.597 [2024-07-12 13:32:00.173903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:11.597 [2024-07-12 13:32:00.173917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.859 #20 NEW cov: 12104 ft: 13312 corp: 6/747b lim: 320 exec/s: 0 rss: 72Mb L: 160/160 MS: 1 ChangeBit- 00:06:11.859 [2024-07-12 13:32:00.213901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.859 [2024-07-12 13:32:00.213925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.859 [2024-07-12 13:32:00.213966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:1af08a30 cdw10:00000000 cdw11:00000000 00:06:11.859 [2024-07-12 13:32:00.213976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.859 #21 NEW cov: 12104 ft: 13396 corp: 7/907b lim: 320 exec/s: 0 rss: 72Mb L: 160/160 MS: 1 ChangeASCIIInt- 00:06:11.859 [2024-07-12 13:32:00.274293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.859 [2024-07-12 13:32:00.274318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.859 [2024-07-12 13:32:00.274360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:1af08a35 cdw10:00000000 cdw11:00000000 00:06:11.859 [2024-07-12 13:32:00.274370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.859 [2024-07-12 13:32:00.274409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:11.859 [2024-07-12 13:32:00.274419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.859 [2024-07-12 13:32:00.274458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:ffff cdw10:ffffffff cdw11:2700ffff 00:06:11.859 [2024-07-12 13:32:00.274468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.859 #22 NEW cov: 12104 ft: 13681 corp: 8/1163b lim: 320 exec/s: 0 rss: 72Mb L: 256/256 MS: 1 InsertRepeatedBytes- 00:06:11.859 [2024-07-12 13:32:00.324180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:ffffff00 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:06:11.859 [2024-07-12 13:32:00.324204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.859 [2024-07-12 13:32:00.324259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.859 [2024-07-12 13:32:00.324270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.859 #23 NEW cov: 12104 ft: 13771 corp: 9/1323b lim: 320 exec/s: 0 rss: 72Mb L: 160/256 MS: 1 CrossOver- 00:06:11.859 [2024-07-12 13:32:00.384354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.859 [2024-07-12 13:32:00.384378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.859 [2024-07-12 13:32:00.384419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:11.859 [2024-07-12 13:32:00.384429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.859 #24 NEW cov: 12104 ft: 13809 corp: 10/1487b lim: 320 exec/s: 0 rss: 72Mb L: 164/256 MS: 1 CMP- DE: "\000\000\000\000"- 00:06:11.859 [2024-07-12 13:32:00.424459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xebe61878 00:06:11.859 [2024-07-12 13:32:00.424484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.859 [2024-07-12 13:32:00.424526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:11.859 [2024-07-12 13:32:00.424536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.121 #25 NEW cov: 12104 ft: 13851 corp: 11/1647b lim: 320 exec/s: 0 rss: 72Mb L: 160/256 MS: 1 PersAutoDict- DE: "\000'\032\360x\030\346\353"- 00:06:12.121 [2024-07-12 13:32:00.464588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.121 [2024-07-12 13:32:00.464613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.121 [2024-07-12 13:32:00.464654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:1af08a30 cdw10:00000000 cdw11:00000000 00:06:12.121 [2024-07-12 13:32:00.464664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.121 #26 NEW cov: 12104 ft: 13924 corp: 12/1808b lim: 320 exec/s: 0 rss: 72Mb L: 161/256 MS: 1 InsertByte- 00:06:12.121 [2024-07-12 13:32:00.524657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.121 [2024-07-12 13:32:00.524682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.121 #27 NEW cov: 12104 ft: 13933 corp: 13/1902b lim: 320 exec/s: 0 rss: 72Mb L: 94/256 MS: 1 EraseBytes- 00:06:12.121 [2024-07-12 13:32:00.565053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.121 [2024-07-12 13:32:00.565077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.121 [2024-07-12 13:32:00.565122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:1af08a35 cdw10:00000000 cdw11:00000000 00:06:12.121 [2024-07-12 13:32:00.565132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.121 [2024-07-12 13:32:00.565174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.121 [2024-07-12 13:32:00.565184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.121 [2024-07-12 13:32:00.565223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:ffff cdw10:ffffffff cdw11:2700ffff 00:06:12.121 [2024-07-12 13:32:00.565238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.121 #28 NEW cov: 12104 ft: 13986 corp: 14/2158b lim: 320 exec/s: 0 rss: 72Mb L: 256/256 MS: 1 ChangeBit- 00:06:12.121 [2024-07-12 13:32:00.625000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:00d5d5d5 00:06:12.121 [2024-07-12 13:32:00.625025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.121 [2024-07-12 13:32:00.625067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.121 [2024-07-12 13:32:00.625077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.121 #29 NEW cov: 12104 ft: 14094 corp: 15/2297b lim: 320 exec/s: 0 rss: 73Mb L: 139/256 MS: 1 InsertRepeatedBytes- 00:06:12.121 [2024-07-12 13:32:00.685269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffff0000 00:06:12.121 [2024-07-12 13:32:00.685294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.121 [2024-07-12 13:32:00.685338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:ff000000 cdw10:00000000 cdw11:00000000 00:06:12.121 [2024-07-12 13:32:00.685348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.121 [2024-07-12 13:32:00.685391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.121 [2024-07-12 13:32:00.685401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.383 NEW_FUNC[1/1]: 
0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:12.383 #30 NEW cov: 12127 ft: 14218 corp: 16/2492b lim: 320 exec/s: 0 rss: 73Mb L: 195/256 MS: 1 CrossOver- 00:06:12.383 [2024-07-12 13:32:00.735497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.383 [2024-07-12 13:32:00.735525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.383 [2024-07-12 13:32:00.735566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff 00:06:12.383 [2024-07-12 13:32:00.735578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.383 [2024-07-12 13:32:00.735624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.383 [2024-07-12 13:32:00.735635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.383 [2024-07-12 13:32:00.735677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:df000000 cdw11:f08a304a 00:06:12.383 [2024-07-12 13:32:00.735687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.383 #31 NEW cov: 12127 ft: 14260 corp: 17/2767b lim: 320 exec/s: 0 rss: 73Mb L: 275/275 MS: 1 CrossOver- 00:06:12.383 [2024-07-12 13:32:00.795456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3b) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:00d5d5d5 SGL TRANSPORT DATA BLOCK TRANSPORT 0xd5d5d5d5d5d5d5d5 00:06:12.383 [2024-07-12 13:32:00.795482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.383 [2024-07-12 13:32:00.795522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.383 [2024-07-12 13:32:00.795533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.383 #32 NEW cov: 12127 ft: 14280 corp: 18/2906b lim: 320 exec/s: 32 rss: 73Mb L: 139/275 MS: 1 ChangeByte- 00:06:12.383 [2024-07-12 13:32:00.855630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xebe61878 00:06:12.383 [2024-07-12 13:32:00.855656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.383 [2024-07-12 13:32:00.855699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.383 [2024-07-12 13:32:00.855710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.383 #33 NEW cov: 12127 ft: 14297 corp: 19/3066b lim: 320 exec/s: 33 rss: 73Mb L: 160/275 MS: 1 ChangeBit- 00:06:12.383 [2024-07-12 13:32:00.915797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 
cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.383 [2024-07-12 13:32:00.915823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.383 [2024-07-12 13:32:00.915865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:1af08a30 cdw10:00000000 cdw11:00000000 00:06:12.383 [2024-07-12 13:32:00.915875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.383 #34 NEW cov: 12127 ft: 14305 corp: 20/3226b lim: 320 exec/s: 34 rss: 73Mb L: 160/275 MS: 1 CopyPart- 00:06:12.383 [2024-07-12 13:32:00.956036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00ea0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.383 [2024-07-12 13:32:00.956060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.383 [2024-07-12 13:32:00.956102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff 00:06:12.383 [2024-07-12 13:32:00.956113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.383 [2024-07-12 13:32:00.956162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.383 [2024-07-12 13:32:00.956172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.383 [2024-07-12 13:32:00.956212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:df000000 cdw11:f08a304a 00:06:12.383 [2024-07-12 13:32:00.956223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.645 #35 NEW cov: 12127 ft: 14339 corp: 21/3501b lim: 320 exec/s: 35 rss: 73Mb L: 275/275 MS: 1 ChangeByte- 00:06:12.645 [2024-07-12 13:32:01.016105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.645 [2024-07-12 13:32:01.016130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.645 [2024-07-12 13:32:01.016171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.645 [2024-07-12 13:32:01.016182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.645 [2024-07-12 13:32:01.016237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffff00 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:12.645 [2024-07-12 13:32:01.016249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.645 #36 NEW cov: 12127 ft: 14365 corp: 22/3720b lim: 320 exec/s: 36 rss: 73Mb L: 219/275 MS: 1 InsertRepeatedBytes- 00:06:12.645 [2024-07-12 13:32:01.076193] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:ffff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x18000000000000 00:06:12.645 [2024-07-12 13:32:01.076218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.645 [2024-07-12 13:32:01.076275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.645 [2024-07-12 13:32:01.076287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.645 #37 NEW cov: 12127 ft: 14436 corp: 23/3881b lim: 320 exec/s: 37 rss: 73Mb L: 161/275 MS: 1 InsertByte- 00:06:12.645 [2024-07-12 13:32:01.136356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:ffff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x18000000000000 00:06:12.645 [2024-07-12 13:32:01.136380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.645 [2024-07-12 13:32:01.136429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.645 [2024-07-12 13:32:01.136440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.645 #38 NEW cov: 12127 ft: 14443 corp: 24/4042b lim: 320 exec/s: 38 rss: 73Mb L: 161/275 MS: 1 ShuffleBytes- 00:06:12.645 [2024-07-12 13:32:01.196648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.645 [2024-07-12 13:32:01.196672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.645 [2024-07-12 13:32:01.196712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.645 [2024-07-12 13:32:01.196722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.645 [2024-07-12 13:32:01.196770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffff00 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:12.645 [2024-07-12 13:32:01.196782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.905 #39 NEW cov: 12127 ft: 14450 corp: 25/4261b lim: 320 exec/s: 39 rss: 73Mb L: 219/275 MS: 1 ShuffleBytes- 00:06:12.905 [2024-07-12 13:32:01.256944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00ea0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.905 [2024-07-12 13:32:01.256969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.905 [2024-07-12 13:32:01.257011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff 00:06:12.905 [2024-07-12 13:32:01.257021] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.905 [2024-07-12 13:32:01.257063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.905 [2024-07-12 13:32:01.257073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.905 [2024-07-12 13:32:01.257112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:df000000 cdw11:f08a304a 00:06:12.905 [2024-07-12 13:32:01.257122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.905 NEW_FUNC[1/1]: 0x17c03f0 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:06:12.905 #40 NEW cov: 12141 ft: 14773 corp: 26/4536b lim: 320 exec/s: 40 rss: 73Mb L: 275/275 MS: 1 ChangeBinInt- 00:06:12.906 [2024-07-12 13:32:01.317036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:78f01a27 cdw11:00ebe618 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.906 [2024-07-12 13:32:01.317061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.906 [2024-07-12 13:32:01.317105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.906 [2024-07-12 13:32:01.317115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.906 [2024-07-12 13:32:01.317155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:ffffffeb cdw11:ffffffff 00:06:12.906 [2024-07-12 13:32:01.317165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.906 [2024-07-12 13:32:01.317211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:7 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.906 [2024-07-12 13:32:01.317222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.906 #41 NEW cov: 12141 ft: 14779 corp: 27/4838b lim: 320 exec/s: 41 rss: 73Mb L: 302/302 MS: 1 CrossOver- 00:06:12.906 [2024-07-12 13:32:01.366917] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x202020202020202 00:06:12.906 [2024-07-12 13:32:01.367123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.906 [2024-07-12 13:32:01.367146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.906 [2024-07-12 13:32:01.367187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:1af08a30 cdw10:02020202 cdw11:02020202 00:06:12.906 [2024-07-12 13:32:01.367198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.906 [2024-07-12 13:32:01.367243] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:2020202 cdw10:02020202 cdw11:02020202 SGL TRANSPORT DATA BLOCK TRANSPORT 0x202020202020202 00:06:12.906 [2024-07-12 13:32:01.367254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.906 NEW_FUNC[1/1]: 0x117a190 in nvmf_ctrlr_get_log_page /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:2616 00:06:12.906 #42 NEW cov: 12171 ft: 14816 corp: 28/5086b lim: 320 exec/s: 42 rss: 73Mb L: 248/302 MS: 1 InsertRepeatedBytes- 00:06:12.906 [2024-07-12 13:32:01.417112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.906 [2024-07-12 13:32:01.417139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.906 [2024-07-12 13:32:01.417181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:1af08a35 cdw10:00000000 cdw11:00000000 00:06:12.906 [2024-07-12 13:32:01.417191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.906 #43 NEW cov: 12171 ft: 14830 corp: 29/5246b lim: 320 exec/s: 43 rss: 73Mb L: 160/302 MS: 1 ShuffleBytes- 00:06:12.906 [2024-07-12 13:32:01.457235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.906 [2024-07-12 13:32:01.457260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.906 [2024-07-12 13:32:01.457302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000ffff 00:06:12.906 [2024-07-12 13:32:01.457312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.906 #44 NEW cov: 12171 ft: 14852 corp: 30/5414b lim: 320 exec/s: 44 rss: 73Mb L: 168/302 MS: 1 PersAutoDict- DE: "\337J5\212\360\032'\000"- 00:06:13.167 [2024-07-12 13:32:01.497329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x271af08a 00:06:13.167 [2024-07-12 13:32:01.497358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.167 [2024-07-12 13:32:01.497400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:13.167 [2024-07-12 13:32:01.497410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.167 #45 NEW cov: 12171 ft: 14883 corp: 31/5578b lim: 320 exec/s: 45 rss: 73Mb L: 164/302 MS: 1 CrossOver- 00:06:13.167 [2024-07-12 13:32:01.557578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffff0000 00:06:13.167 [2024-07-12 13:32:01.557603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.167 [2024-07-12 
13:32:01.557647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00ff0000 00:06:13.167 [2024-07-12 13:32:01.557657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.167 [2024-07-12 13:32:01.557698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:13.167 [2024-07-12 13:32:01.557708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.167 #46 NEW cov: 12171 ft: 14886 corp: 32/5798b lim: 320 exec/s: 46 rss: 73Mb L: 220/302 MS: 1 CrossOver- 00:06:13.167 [2024-07-12 13:32:01.607627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.167 [2024-07-12 13:32:01.607652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.167 [2024-07-12 13:32:01.607692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:13.167 [2024-07-12 13:32:01.607703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.167 #47 NEW cov: 12171 ft: 14919 corp: 33/5959b lim: 320 exec/s: 47 rss: 73Mb L: 161/302 MS: 1 InsertByte- 00:06:13.167 [2024-07-12 13:32:01.647936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:50000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.167 [2024-07-12 13:32:01.647960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.167 [2024-07-12 13:32:01.648007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (50) qid:0 cid:5 nsid:50505050 cdw10:50505050 cdw11:50505050 00:06:13.167 [2024-07-12 13:32:01.648017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.167 [2024-07-12 13:32:01.648062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (50) qid:0 cid:6 nsid:50505050 cdw10:271af08a cdw11:00000000 00:06:13.167 [2024-07-12 13:32:01.648072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.167 [2024-07-12 13:32:01.648112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:ff000000 cdw10:ffffffff cdw11:ffffffff 00:06:13.167 [2024-07-12 13:32:01.648122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.167 #48 NEW cov: 12172 ft: 14926 corp: 34/6218b lim: 320 exec/s: 48 rss: 73Mb L: 259/302 MS: 1 InsertRepeatedBytes- 00:06:13.167 [2024-07-12 13:32:01.707969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.167 [2024-07-12 13:32:01.707999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.167 [2024-07-12 13:32:01.708041] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:13.167 [2024-07-12 13:32:01.708052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.167 [2024-07-12 13:32:01.708100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffff00 cdw10:ffffffff cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:13.167 [2024-07-12 13:32:01.708110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.429 #49 NEW cov: 12172 ft: 14930 corp: 35/6446b lim: 320 exec/s: 49 rss: 74Mb L: 228/302 MS: 1 CopyPart- 00:06:13.429 [2024-07-12 13:32:01.768204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00ea0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.429 [2024-07-12 13:32:01.768228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.429 [2024-07-12 13:32:01.768278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff 00:06:13.429 [2024-07-12 13:32:01.768289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.429 [2024-07-12 13:32:01.768339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.429 [2024-07-12 13:32:01.768349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.429 [2024-07-12 13:32:01.768388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:8a304adf 00:06:13.429 [2024-07-12 13:32:01.768398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.429 #50 NEW cov: 12172 ft: 14955 corp: 36/6722b lim: 320 exec/s: 50 rss: 74Mb L: 276/302 MS: 1 InsertByte- 00:06:13.429 [2024-07-12 13:32:01.818342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffff0000 00:06:13.429 [2024-07-12 13:32:01.818366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.429 [2024-07-12 13:32:01.818406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff 00:06:13.429 [2024-07-12 13:32:01.818418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.429 [2024-07-12 13:32:01.818465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:8a304adf cdw11:00271af0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.429 [2024-07-12 13:32:01.818476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.429 [2024-07-12 13:32:01.818517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ 
(00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:13.429 [2024-07-12 13:32:01.818528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.430 #51 NEW cov: 12172 ft: 14969 corp: 37/7018b lim: 320 exec/s: 25 rss: 74Mb L: 296/302 MS: 1 InsertRepeatedBytes- 00:06:13.430 #51 DONE cov: 12172 ft: 14969 corp: 37/7018b lim: 320 exec/s: 25 rss: 74Mb 00:06:13.430 ###### Recommended dictionary. ###### 00:06:13.430 "\000'\032\360x\030\346\353" # Uses: 1 00:06:13.430 "\337J5\212\360\032'\000" # Uses: 1 00:06:13.430 "\000\000\000\000" # Uses: 0 00:06:13.430 ###### End of recommended dictionary. ###### 00:06:13.430 Done 51 runs in 2 second(s) 00:06:13.430 13:32:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:13.430 13:32:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:13.430 13:32:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:13.430 13:32:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:13.430 13:32:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:13.430 13:32:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:13.430 13:32:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:13.430 13:32:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:13.430 13:32:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:13.430 13:32:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:13.430 13:32:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:13.430 13:32:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:13.430 13:32:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:13.430 13:32:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:13.430 13:32:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:13.430 13:32:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:13.430 13:32:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:13.430 13:32:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:13.430 13:32:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:13.430 [2024-07-12 13:32:01.995731] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
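The "Recommended dictionary" block a few lines up is libFuzzer's end-of-run hint list: byte strings (printed with octal escapes) that repeatedly unlocked new coverage, here two 8-byte admin-command patterns plus a four-zero-byte run. Tokens like these can be carried into later runs through libFuzzer's -dict= option. A minimal sketch, assuming a hypothetical file name nvmf_0.dict; the \xNN spellings are hand-converted from the octal escapes in the log:

    # Sketch only: turn the recommended entries above into a libFuzzer/AFL
    # dictionary file (syntax: one quoted token per line, hex escapes).
    cat > nvmf_0.dict <<'EOF'
    "\x00\x27\x1a\xf0\x78\x18\xe6\xeb"
    "\xdf\x4a\x35\x8a\xf0\x1a\x27\x00"
    "\x00\x00\x00\x00"
    EOF

llvm_nvme_fuzz wraps libFuzzer (the status lines above are libFuzzer's own), so adding -dict=nvmf_0.dict alongside the SPDK-specific flags is plausible, but whether the harness forwards unrecognized flags through to libFuzzer would need checking against its option parsing.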
00:06:13.430 [2024-07-12 13:32:01.995808] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2437750 ] 00:06:13.692 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.692 [2024-07-12 13:32:02.159930] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.692 [2024-07-12 13:32:02.213043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.692 [2024-07-12 13:32:02.274725] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.953 [2024-07-12 13:32:02.291064] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:13.953 INFO: Running with entropic power schedule (0xFF, 100). 00:06:13.953 INFO: Seed: 944668141 00:06:13.953 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:13.953 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:13.953 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:13.953 INFO: A corpus is not provided, starting from an empty corpus 00:06:13.953 #2 INITED exec/s: 0 rss: 64Mb 00:06:13.953 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:13.953 This may also happen if the target rejected all inputs we tried so far 00:06:13.953 [2024-07-12 13:32:02.360781] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000afaf 00:06:13.953 [2024-07-12 13:32:02.361236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8aaf83af cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.953 [2024-07-12 13:32:02.361278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.953 NEW_FUNC[1/696]: 0x484780 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:13.953 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:13.953 #20 NEW cov: 11929 ft: 11928 corp: 2/8b lim: 30 exec/s: 0 rss: 70Mb L: 7/7 MS: 3 CopyPart-ChangeBit-InsertRepeatedBytes- 00:06:14.215 [2024-07-12 13:32:02.551448] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000a9af 00:06:14.215 [2024-07-12 13:32:02.551963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8aaf83af cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.215 [2024-07-12 13:32:02.552013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.215 #21 NEW cov: 12059 ft: 12437 corp: 3/15b lim: 30 exec/s: 0 rss: 70Mb L: 7/7 MS: 1 ChangeBinInt- 00:06:14.215 [2024-07-12 13:32:02.631966] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:14.215 [2024-07-12 13:32:02.632221] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:14.215 [2024-07-12 13:32:02.632466] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:14.215 [2024-07-12 13:32:02.632706] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: 
Invalid log page offset 0x10000d1af 00:06:14.215 [2024-07-12 13:32:02.633192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8aaf81d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.215 [2024-07-12 13:32:02.633222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.215 [2024-07-12 13:32:02.633330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.215 [2024-07-12 13:32:02.633346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.215 [2024-07-12 13:32:02.633463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.215 [2024-07-12 13:32:02.633481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.215 [2024-07-12 13:32:02.633602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.215 [2024-07-12 13:32:02.633619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.215 #22 NEW cov: 12065 ft: 13396 corp: 4/43b lim: 30 exec/s: 0 rss: 70Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:06:14.215 [2024-07-12 13:32:02.712295] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.215 [2024-07-12 13:32:02.712537] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.215 [2024-07-12 13:32:02.712779] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.215 [2024-07-12 13:32:02.713017] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000afaf 00:06:14.215 [2024-07-12 13:32:02.713499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.215 [2024-07-12 13:32:02.713530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.215 [2024-07-12 13:32:02.713641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.215 [2024-07-12 13:32:02.713661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.215 [2024-07-12 13:32:02.713769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.215 [2024-07-12 13:32:02.713787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.215 [2024-07-12 13:32:02.713908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff838a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.215 [2024-07-12 13:32:02.713925] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.215 #28 NEW cov: 12150 ft: 13626 corp: 5/70b lim: 30 exec/s: 0 rss: 70Mb L: 27/28 MS: 1 InsertRepeatedBytes- 00:06:14.215 [2024-07-12 13:32:02.782533] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:14.215 [2024-07-12 13:32:02.783225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.215 [2024-07-12 13:32:02.783258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.215 [2024-07-12 13:32:02.783377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.215 [2024-07-12 13:32:02.783393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.476 #30 NEW cov: 12190 ft: 14042 corp: 6/85b lim: 30 exec/s: 0 rss: 70Mb L: 15/28 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:14.476 [2024-07-12 13:32:02.842727] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:14.476 [2024-07-12 13:32:02.843436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.476 [2024-07-12 13:32:02.843466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.476 [2024-07-12 13:32:02.843581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.476 [2024-07-12 13:32:02.843597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.476 #31 NEW cov: 12190 ft: 14101 corp: 7/100b lim: 30 exec/s: 0 rss: 70Mb L: 15/28 MS: 1 ShuffleBytes- 00:06:14.476 [2024-07-12 13:32:02.913016] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:14.476 [2024-07-12 13:32:02.913722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000040 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.476 [2024-07-12 13:32:02.913751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.476 [2024-07-12 13:32:02.913877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.476 [2024-07-12 13:32:02.913895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.476 #32 NEW cov: 12190 ft: 14216 corp: 8/115b lim: 30 exec/s: 0 rss: 72Mb L: 15/28 MS: 1 ChangeByte- 00:06:14.476 [2024-07-12 13:32:02.993488] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:14.476 [2024-07-12 13:32:02.993738] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:14.476 [2024-07-12 13:32:02.993979] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 
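The ctrlr.c:2635 *ERROR* lines in this stretch are the NVMe-oF target rejecting fuzzed GET LOG PAGE commands: the 8-byte log-page offset assembled from cdw12/cdw13 must be dword-aligned, and every offset the mutator produced here (0x30000afaf, 0x10000d1d1, 0x202020202020202, ...) has at least one of its low two bits set, so each command completes as INVALID FIELD (00/02). The alignment test below is paraphrased from the pattern in the log, not quoted from lib/nvmf/ctrlr.c; it just confirms from a shell that all of these offsets fail a dword-alignment check:

    # Dword-aligned offsets would pass; low two bits set means rejection.
    for off in 0x30000afaf 0x30000a9af 0x10000d1d1 0x202020202020202; do
        if (( off & 3 )); then printf '%s -> INVALID FIELD (unaligned)\n' "$off"
        else printf '%s -> aligned\n' "$off"; fi
    done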
00:06:14.476 [2024-07-12 13:32:02.994221] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1af 00:06:14.476 [2024-07-12 13:32:02.994706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8aaf81d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.476 [2024-07-12 13:32:02.994736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.476 [2024-07-12 13:32:02.994852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.476 [2024-07-12 13:32:02.994868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.476 [2024-07-12 13:32:02.994985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.476 [2024-07-12 13:32:02.995002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.476 [2024-07-12 13:32:02.995112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:d1d18130 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.476 [2024-07-12 13:32:02.995128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.476 #33 NEW cov: 12190 ft: 14280 corp: 9/143b lim: 30 exec/s: 0 rss: 72Mb L: 28/28 MS: 1 ChangeByte- 00:06:14.737 [2024-07-12 13:32:03.073790] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.737 [2024-07-12 13:32:03.074056] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000ffff 00:06:14.737 [2024-07-12 13:32:03.074304] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.737 [2024-07-12 13:32:03.074544] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000afaf 00:06:14.737 [2024-07-12 13:32:03.075034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.737 [2024-07-12 13:32:03.075064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.737 [2024-07-12 13:32:03.075180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff81ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.737 [2024-07-12 13:32:03.075195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.737 [2024-07-12 13:32:03.075318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.737 [2024-07-12 13:32:03.075338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.737 [2024-07-12 13:32:03.075457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff838a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.737 
[2024-07-12 13:32:03.075472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.737 #34 NEW cov: 12190 ft: 14302 corp: 10/170b lim: 30 exec/s: 0 rss: 72Mb L: 27/28 MS: 1 ChangeBinInt- 00:06:14.737 [2024-07-12 13:32:03.153892] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:14.737 [2024-07-12 13:32:03.154138] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x32 00:06:14.737 [2024-07-12 13:32:03.154612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.737 [2024-07-12 13:32:03.154641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.737 [2024-07-12 13:32:03.154762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.737 [2024-07-12 13:32:03.154779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.737 #35 NEW cov: 12190 ft: 14354 corp: 11/186b lim: 30 exec/s: 0 rss: 72Mb L: 16/28 MS: 1 InsertByte- 00:06:14.737 [2024-07-12 13:32:03.214354] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:14.737 [2024-07-12 13:32:03.214605] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:14.737 [2024-07-12 13:32:03.214861] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:14.737 [2024-07-12 13:32:03.215095] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1af 00:06:14.737 [2024-07-12 13:32:03.215586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8aaf81d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.737 [2024-07-12 13:32:03.215617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.737 [2024-07-12 13:32:03.215739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.737 [2024-07-12 13:32:03.215754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.737 [2024-07-12 13:32:03.215872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.737 [2024-07-12 13:32:03.215888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.737 [2024-07-12 13:32:03.216005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:d1d18130 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.737 [2024-07-12 13:32:03.216020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.737 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:14.737 #36 NEW cov: 12213 ft: 14527 corp: 12/215b lim: 30 exec/s: 0 
rss: 72Mb L: 29/29 MS: 1 InsertByte- 00:06:14.737 [2024-07-12 13:32:03.294634] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.737 [2024-07-12 13:32:03.294881] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff7a 00:06:14.737 [2024-07-12 13:32:03.295109] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.737 [2024-07-12 13:32:03.295357] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000afaf 00:06:14.737 [2024-07-12 13:32:03.295834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.737 [2024-07-12 13:32:03.295866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.737 [2024-07-12 13:32:03.295990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.737 [2024-07-12 13:32:03.296005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.737 [2024-07-12 13:32:03.296119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.737 [2024-07-12 13:32:03.296134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.737 [2024-07-12 13:32:03.296263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff838a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.737 [2024-07-12 13:32:03.296287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.998 #37 NEW cov: 12213 ft: 14626 corp: 13/242b lim: 30 exec/s: 37 rss: 72Mb L: 27/29 MS: 1 ChangeByte- 00:06:14.998 [2024-07-12 13:32:03.354899] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:14.998 [2024-07-12 13:32:03.355145] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:14.998 [2024-07-12 13:32:03.355389] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:14.998 [2024-07-12 13:32:03.355635] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1af 00:06:14.998 [2024-07-12 13:32:03.356088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8aaf81d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.998 [2024-07-12 13:32:03.356118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.998 [2024-07-12 13:32:03.356236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.998 [2024-07-12 13:32:03.356252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.998 [2024-07-12 13:32:03.356366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.998 [2024-07-12 13:32:03.356381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.998 [2024-07-12 13:32:03.356502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:d1d18130 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.998 [2024-07-12 13:32:03.356519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.998 #38 NEW cov: 12213 ft: 14642 corp: 14/271b lim: 30 exec/s: 38 rss: 72Mb L: 29/29 MS: 1 CrossOver- 00:06:14.998 [2024-07-12 13:32:03.435017] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000afaf 00:06:14.998 [2024-07-12 13:32:03.435475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8aaf81af cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.998 [2024-07-12 13:32:03.435506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.998 #49 NEW cov: 12213 ft: 14667 corp: 15/277b lim: 30 exec/s: 49 rss: 72Mb L: 6/29 MS: 1 EraseBytes- 00:06:14.998 [2024-07-12 13:32:03.495565] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.998 [2024-07-12 13:32:03.495811] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:14.998 [2024-07-12 13:32:03.496051] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000afaf 00:06:14.998 [2024-07-12 13:32:03.496511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.998 [2024-07-12 13:32:03.496540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.998 [2024-07-12 13:32:03.496657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.998 [2024-07-12 13:32:03.496676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.998 [2024-07-12 13:32:03.496782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.998 [2024-07-12 13:32:03.496796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.998 #50 NEW cov: 12213 ft: 14902 corp: 16/299b lim: 30 exec/s: 50 rss: 72Mb L: 22/29 MS: 1 EraseBytes- 00:06:14.998 [2024-07-12 13:32:03.555806] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:14.998 [2024-07-12 13:32:03.556047] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:14.998 [2024-07-12 13:32:03.556290] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:14.998 [2024-07-12 13:32:03.556533] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1af 00:06:14.998 [2024-07-12 13:32:03.557035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8aaf81d1 
cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.998 [2024-07-12 13:32:03.557064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.998 [2024-07-12 13:32:03.557171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.998 [2024-07-12 13:32:03.557185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.998 [2024-07-12 13:32:03.557297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.998 [2024-07-12 13:32:03.557313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.998 [2024-07-12 13:32:03.557436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:d1d18130 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.998 [2024-07-12 13:32:03.557452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.259 #51 NEW cov: 12213 ft: 14974 corp: 17/328b lim: 30 exec/s: 51 rss: 72Mb L: 29/29 MS: 1 ChangeByte- 00:06:15.259 [2024-07-12 13:32:03.636134] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:15.259 [2024-07-12 13:32:03.636387] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:15.259 [2024-07-12 13:32:03.636625] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:15.259 [2024-07-12 13:32:03.636867] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1af 00:06:15.259 [2024-07-12 13:32:03.637349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8aaf81d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.259 [2024-07-12 13:32:03.637379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.259 [2024-07-12 13:32:03.637496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.259 [2024-07-12 13:32:03.637511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.259 [2024-07-12 13:32:03.637626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.259 [2024-07-12 13:32:03.637643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.259 [2024-07-12 13:32:03.637752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:d1d18135 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.259 [2024-07-12 13:32:03.637769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.259 #52 NEW cov: 12213 ft: 14984 corp: 18/356b lim: 30 exec/s: 52 rss: 72Mb L: 28/29 MS: 1 ChangeASCIIInt- 00:06:15.259 
[2024-07-12 13:32:03.696138] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:15.259 [2024-07-12 13:32:03.696601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.259 [2024-07-12 13:32:03.696631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.259 #53 NEW cov: 12213 ft: 14991 corp: 19/362b lim: 30 exec/s: 53 rss: 72Mb L: 6/29 MS: 1 CrossOver- 00:06:15.259 [2024-07-12 13:32:03.756817] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:15.259 [2024-07-12 13:32:03.757069] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:15.259 [2024-07-12 13:32:03.757310] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:15.259 [2024-07-12 13:32:03.757553] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1000035d1 00:06:15.259 [2024-07-12 13:32:03.757797] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000afaf 00:06:15.259 [2024-07-12 13:32:03.758273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8aaf81d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.259 [2024-07-12 13:32:03.758303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.259 [2024-07-12 13:32:03.758416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.259 [2024-07-12 13:32:03.758430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.259 [2024-07-12 13:32:03.758550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.259 [2024-07-12 13:32:03.758565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.259 [2024-07-12 13:32:03.758686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.259 [2024-07-12 13:32:03.758702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.259 [2024-07-12 13:32:03.758818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:d1af81af cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.259 [2024-07-12 13:32:03.758833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:15.259 #54 NEW cov: 12213 ft: 15057 corp: 20/392b lim: 30 exec/s: 54 rss: 72Mb L: 30/30 MS: 1 CopyPart- 00:06:15.259 [2024-07-12 13:32:03.837086] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:15.259 [2024-07-12 13:32:03.837337] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000ffff 00:06:15.259 [2024-07-12 13:32:03.837576] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 
0x30000ffff 00:06:15.259 [2024-07-12 13:32:03.837814] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000afaf 00:06:15.259 [2024-07-12 13:32:03.838291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.259 [2024-07-12 13:32:03.838322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.259 [2024-07-12 13:32:03.838436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff81ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.259 [2024-07-12 13:32:03.838452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.259 [2024-07-12 13:32:03.838570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.259 [2024-07-12 13:32:03.838588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.259 [2024-07-12 13:32:03.838699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff838a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.259 [2024-07-12 13:32:03.838715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.520 #60 NEW cov: 12213 ft: 15083 corp: 21/419b lim: 30 exec/s: 60 rss: 72Mb L: 27/30 MS: 1 CopyPart- 00:06:15.520 [2024-07-12 13:32:03.917369] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:15.520 [2024-07-12 13:32:03.917618] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff7a 00:06:15.520 [2024-07-12 13:32:03.917859] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:15.520 [2024-07-12 13:32:03.918101] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000afaf 00:06:15.520 [2024-07-12 13:32:03.918583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.520 [2024-07-12 13:32:03.918614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.520 [2024-07-12 13:32:03.918728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.520 [2024-07-12 13:32:03.918744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.520 [2024-07-12 13:32:03.918865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.520 [2024-07-12 13:32:03.918881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.520 [2024-07-12 13:32:03.918997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff838a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
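Each "#N NEW" line above is a libFuzzer status report: cov counts covered code edges, ft distinct coverage features, corp the corpus as units/total bytes, lim the current input-length cap (30 bytes for this run versus 320 in the first run shown), exec/s throughput, rss resident memory, L the new input's length over the largest in the corpus, and MS the mutation sequence that produced it (CopyPart, ChangeByte, CrossOver, ...). To chart how coverage grows across a run from a saved copy of this output (the build.log path is hypothetical):

    # Extract "#N cov" pairs; the pattern tolerates libFuzzer's column padding.
    grep -o '#[0-9]* NEW *cov: [0-9]*' build.log | awk '{ print $1, $4 }'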
00:06:15.520 [2024-07-12 13:32:03.919014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.520 #61 NEW cov: 12213 ft: 15091 corp: 22/446b lim: 30 exec/s: 61 rss: 73Mb L: 27/30 MS: 1 ChangeByte- 00:06:15.520 [2024-07-12 13:32:03.997521] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:15.520 [2024-07-12 13:32:03.997773] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x32 00:06:15.520 [2024-07-12 13:32:03.998240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.520 [2024-07-12 13:32:03.998269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.520 [2024-07-12 13:32:03.998392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.520 [2024-07-12 13:32:03.998408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.520 #62 NEW cov: 12213 ft: 15160 corp: 23/462b lim: 30 exec/s: 62 rss: 73Mb L: 16/30 MS: 1 ChangeBit- 00:06:15.520 [2024-07-12 13:32:04.067765] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:15.520 [2024-07-12 13:32:04.068521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000040 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.520 [2024-07-12 13:32:04.068550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.520 [2024-07-12 13:32:04.068671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00ff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.520 [2024-07-12 13:32:04.068688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.782 #63 NEW cov: 12213 ft: 15203 corp: 24/477b lim: 30 exec/s: 63 rss: 73Mb L: 15/30 MS: 1 CrossOver- 00:06:15.782 [2024-07-12 13:32:04.148165] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000afaf 00:06:15.782 [2024-07-12 13:32:04.148414] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000afaf 00:06:15.782 [2024-07-12 13:32:04.148891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8a8a83af cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.782 [2024-07-12 13:32:04.148918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.782 [2024-07-12 13:32:04.149037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a9af83af cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.782 [2024-07-12 13:32:04.149053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.782 #64 NEW cov: 12213 ft: 15223 corp: 25/491b lim: 30 exec/s: 64 rss: 73Mb L: 14/30 MS: 1 CrossOver- 00:06:15.782 [2024-07-12 13:32:04.208696] ctrlr.c:2635:nvmf_ctrlr_get_log_page: 
*ERROR*: Invalid log page offset 0x10000d1d1 00:06:15.782 [2024-07-12 13:32:04.208933] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:15.782 [2024-07-12 13:32:04.209160] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:15.782 [2024-07-12 13:32:04.209397] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1000035d1 00:06:15.782 [2024-07-12 13:32:04.209634] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000afaf 00:06:15.782 [2024-07-12 13:32:04.210114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8aaf81d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.782 [2024-07-12 13:32:04.210142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.782 [2024-07-12 13:32:04.210263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.782 [2024-07-12 13:32:04.210281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.782 [2024-07-12 13:32:04.210400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.782 [2024-07-12 13:32:04.210417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.782 [2024-07-12 13:32:04.210537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.782 [2024-07-12 13:32:04.210553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.782 [2024-07-12 13:32:04.210667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:d1af81af cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.782 [2024-07-12 13:32:04.210685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:15.782 #65 NEW cov: 12213 ft: 15268 corp: 26/521b lim: 30 exec/s: 65 rss: 73Mb L: 30/30 MS: 1 ShuffleBytes- 00:06:15.782 [2024-07-12 13:32:04.288860] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:15.782 [2024-07-12 13:32:04.289108] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:15.782 [2024-07-12 13:32:04.289366] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1d1 00:06:15.782 [2024-07-12 13:32:04.289609] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d1af 00:06:15.782 [2024-07-12 13:32:04.290087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8aaf81d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.782 [2024-07-12 13:32:04.290117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.782 [2024-07-12 13:32:04.290235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d1d181d1 cdw11:00000001 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.782 [2024-07-12 13:32:04.290251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.782 [2024-07-12 13:32:04.290360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.782 [2024-07-12 13:32:04.290376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.782 [2024-07-12 13:32:04.290489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:d1d181d1 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.782 [2024-07-12 13:32:04.290505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.782 #66 NEW cov: 12213 ft: 15278 corp: 27/549b lim: 30 exec/s: 66 rss: 73Mb L: 28/30 MS: 1 ChangeByte- 00:06:15.782 [2024-07-12 13:32:04.349079] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001b1b 00:06:15.782 [2024-07-12 13:32:04.349335] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001b1b 00:06:15.782 [2024-07-12 13:32:04.349574] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001b1b 00:06:15.782 [2024-07-12 13:32:04.349807] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001b1b 00:06:15.782 [2024-07-12 13:32:04.350294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:1a1b831b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.782 [2024-07-12 13:32:04.350322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.782 [2024-07-12 13:32:04.350438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:1b1b831b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.782 [2024-07-12 13:32:04.350456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.782 [2024-07-12 13:32:04.350564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:1b1b831b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.782 [2024-07-12 13:32:04.350581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.782 [2024-07-12 13:32:04.350698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:1b1b831b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.782 [2024-07-12 13:32:04.350712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:16.041 #70 NEW cov: 12213 ft: 15298 corp: 28/575b lim: 30 exec/s: 35 rss: 73Mb L: 26/30 MS: 4 CrossOver-ChangeBit-CopyPart-InsertRepeatedBytes- 00:06:16.042 #70 DONE cov: 12213 ft: 15298 corp: 28/575b lim: 30 exec/s: 35 rss: 73Mb 00:06:16.042 Done 70 runs in 2 second(s) 00:06:16.042 13:32:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:06:16.042 13:32:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:16.042 13:32:04 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:16.042 13:32:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1
00:06:16.042 13:32:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2
00:06:16.042 13:32:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:16.042 13:32:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:16.042 13:32:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2
00:06:16.042 13:32:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf
00:06:16.042 13:32:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:16.042 13:32:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:16.042 13:32:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2
00:06:16.042 13:32:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402
00:06:16.042 13:32:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2
00:06:16.042 13:32:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402'
00:06:16.042 13:32:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:16.042 13:32:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:16.042 13:32:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:16.042 13:32:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2
00:06:16.042 [2024-07-12 13:32:04.518539] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization...
00:06:16.042 [2024-07-12 13:32:04.518631] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2438405 ]
00:06:16.042 EAL: No free 2048 kB hugepages reported on node 1
00:06:16.302 [2024-07-12 13:32:04.675373] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:16.302 [2024-07-12 13:32:04.728961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.302 [2024-07-12 13:32:04.790597] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:16.302 [2024-07-12 13:32:04.806900] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 ***
00:06:16.302 INFO: Running with entropic power schedule (0xFF, 100).
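Taken together, the nvmf/run.sh trace above amounts to one self-contained launch sequence per fuzzer type: pick a corpus directory and a private TCP port, rewrite the JSON config, write the LSan suppressions, and start llvm_nvme_fuzz against the new listener. The following is a minimal shell sketch reconstructed from the traced commands only; $SPDK stands in for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk, and the function wrapper plus the output redirections of sed and echo are assumptions (the trace records the commands but not where their output goes), so this is not the verbatim nvmf/run.sh:

    # Sketch of one start_llvm_fuzz iteration (shown with fuzzer_type=2 as traced above).
    start_llvm_fuzz() {
        local fuzzer_type=$1 timen=$2 core=$3
        local corpus_dir=$SPDK/../corpus/llvm_nvmf_$fuzzer_type
        local nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
        local suppress_file=/var/tmp/suppress_nvmf_fuzz
        # Report leaked objects, but silence two known allocation sites via LSan suppressions.
        local LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0
        local port=44$(printf %02d "$fuzzer_type")   # 4402, 4403, 4404, ... as traced
        mkdir -p "$corpus_dir"
        # Rewrite the default trsvcid 4420 so this run gets its own TCP listener.
        sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
            "$SPDK/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"   # redirection assumed
        echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"        # redirection assumed
        echo leak:nvmf_ctrlr_create >> "$suppress_file"
        # Run the libFuzzer-based target (-Z selects the fuzzer, -t the time budget).
        "$SPDK/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
            -P "$SPDK/../output/llvm/" \
            -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
            -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"
    }

Giving each fuzzer type its own trsvcid presumably keeps a lingering listener from a previous run from ever accepting the next fuzzer's connections, which matches the distinct ports 4402/4403/4404 seen in this log.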
00:06:16.302 INFO: Seed: 3462657992 00:06:16.302 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:16.302 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:16.302 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:16.302 INFO: A corpus is not provided, starting from an empty corpus 00:06:16.302 #2 INITED exec/s: 0 rss: 63Mb 00:06:16.302 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:16.302 This may also happen if the target rejected all inputs we tried so far 00:06:16.302 [2024-07-12 13:32:04.861790] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:16.302 [2024-07-12 13:32:04.861899] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:16.302 [2024-07-12 13:32:04.862083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.302 [2024-07-12 13:32:04.862115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.302 [2024-07-12 13:32:04.862164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.302 [2024-07-12 13:32:04.862180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.562 NEW_FUNC[1/695]: 0x487230 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:16.562 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:16.562 #4 NEW cov: 11884 ft: 11884 corp: 2/21b lim: 35 exec/s: 0 rss: 69Mb L: 20/20 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:16.562 [2024-07-12 13:32:04.992825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a2a2000a cdw11:a200a2a2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.562 [2024-07-12 13:32:04.992908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.562 #5 NEW cov: 12024 ft: 13045 corp: 3/29b lim: 35 exec/s: 0 rss: 69Mb L: 8/20 MS: 1 InsertRepeatedBytes- 00:06:16.562 [2024-07-12 13:32:05.052220] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:16.563 [2024-07-12 13:32:05.052431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.563 [2024-07-12 13:32:05.052457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.563 #6 NEW cov: 12030 ft: 13240 corp: 4/40b lim: 35 exec/s: 0 rss: 69Mb L: 11/20 MS: 1 InsertRepeatedBytes- 00:06:16.563 [2024-07-12 13:32:05.092470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a2a2000a cdw11:a200a208 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.563 [2024-07-12 13:32:05.092495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:16.563 #7 NEW cov: 12115 ft: 13524 corp: 5/49b lim: 35 exec/s: 0 rss: 69Mb L: 9/20 MS: 1 InsertByte- 00:06:16.822 [2024-07-12 13:32:05.152607] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:16.822 [2024-07-12 13:32:05.152710] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:16.822 [2024-07-12 13:32:05.152896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000008a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.822 [2024-07-12 13:32:05.152927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.822 [2024-07-12 13:32:05.152974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.822 [2024-07-12 13:32:05.152987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.822 [2024-07-12 13:32:05.153032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.822 [2024-07-12 13:32:05.153044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.822 #12 NEW cov: 12115 ft: 13835 corp: 6/70b lim: 35 exec/s: 0 rss: 69Mb L: 21/21 MS: 5 InsertByte-ShuffleBytes-EraseBytes-ChangeBit-CrossOver- 00:06:16.822 [2024-07-12 13:32:05.192733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:22a2000a cdw11:a200a2a2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.822 [2024-07-12 13:32:05.192758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.822 #13 NEW cov: 12115 ft: 13901 corp: 7/78b lim: 35 exec/s: 0 rss: 69Mb L: 8/21 MS: 1 ChangeBit- 00:06:16.822 [2024-07-12 13:32:05.242990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:22a2000a cdw11:a200a2a2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.822 [2024-07-12 13:32:05.243018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.823 [2024-07-12 13:32:05.243060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:000000a2 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.823 [2024-07-12 13:32:05.243071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.823 #14 NEW cov: 12115 ft: 13980 corp: 8/94b lim: 35 exec/s: 0 rss: 69Mb L: 16/21 MS: 1 CrossOver- 00:06:16.823 [2024-07-12 13:32:05.303270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2236000a cdw11:36003636 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.823 [2024-07-12 13:32:05.303296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.823 [2024-07-12 13:32:05.303340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:36360036 cdw11:36003636 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:16.823 [2024-07-12 13:32:05.303351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.823 [2024-07-12 13:32:05.303395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:36360036 cdw11:36003636 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.823 [2024-07-12 13:32:05.303405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.823 #15 NEW cov: 12115 ft: 14049 corp: 9/121b lim: 35 exec/s: 0 rss: 69Mb L: 27/27 MS: 1 InsertRepeatedBytes- 00:06:16.823 [2024-07-12 13:32:05.352994] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:16.823 [2024-07-12 13:32:05.353183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.823 [2024-07-12 13:32:05.353207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.823 #16 NEW cov: 12115 ft: 14065 corp: 10/132b lim: 35 exec/s: 0 rss: 69Mb L: 11/27 MS: 1 ShuffleBytes- 00:06:17.082 [2024-07-12 13:32:05.413316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a2a2000a cdw11:a200a2a2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.082 [2024-07-12 13:32:05.413340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.082 #17 NEW cov: 12115 ft: 14161 corp: 11/140b lim: 35 exec/s: 0 rss: 69Mb L: 8/27 MS: 1 ChangeByte- 00:06:17.082 [2024-07-12 13:32:05.453407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a2a2000a cdw11:a200a2a2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.082 [2024-07-12 13:32:05.453432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.082 #18 NEW cov: 12115 ft: 14184 corp: 12/148b lim: 35 exec/s: 0 rss: 69Mb L: 8/27 MS: 1 ChangeByte- 00:06:17.082 [2024-07-12 13:32:05.493392] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.082 [2024-07-12 13:32:05.493588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:007a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.082 [2024-07-12 13:32:05.493612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.082 #19 NEW cov: 12115 ft: 14215 corp: 13/159b lim: 35 exec/s: 0 rss: 69Mb L: 11/27 MS: 1 ChangeByte- 00:06:17.082 [2024-07-12 13:32:05.533641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000000ca cdw11:0000ea00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.082 [2024-07-12 13:32:05.533669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.082 #24 NEW cov: 12115 ft: 14248 corp: 14/166b lim: 35 exec/s: 0 rss: 69Mb L: 7/27 MS: 5 CrossOver-ChangeByte-InsertByte-EraseBytes-CopyPart- 00:06:17.082 [2024-07-12 13:32:05.573740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000000ca cdw11:0000ea00 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:17.082 [2024-07-12 13:32:05.573764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.082 #25 NEW cov: 12115 ft: 14293 corp: 15/174b lim: 35 exec/s: 0 rss: 70Mb L: 8/27 MS: 1 InsertByte- 00:06:17.082 [2024-07-12 13:32:05.633902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:a2a200a2 cdw11:a200a20a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.082 [2024-07-12 13:32:05.633927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.082 #26 NEW cov: 12115 ft: 14348 corp: 16/182b lim: 35 exec/s: 0 rss: 70Mb L: 8/27 MS: 1 ShuffleBytes- 00:06:17.343 [2024-07-12 13:32:05.673861] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.343 [2024-07-12 13:32:05.674059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:007a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.343 [2024-07-12 13:32:05.674084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.343 #27 NEW cov: 12115 ft: 14365 corp: 17/193b lim: 35 exec/s: 0 rss: 70Mb L: 11/27 MS: 1 ChangeBit- 00:06:17.343 [2024-07-12 13:32:05.734150] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.343 [2024-07-12 13:32:05.734256] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.343 [2024-07-12 13:32:05.734441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000008a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.343 [2024-07-12 13:32:05.734464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.343 [2024-07-12 13:32:05.734509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.343 [2024-07-12 13:32:05.734521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.343 [2024-07-12 13:32:05.734570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.343 [2024-07-12 13:32:05.734582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.343 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:17.343 #28 NEW cov: 12138 ft: 14442 corp: 18/214b lim: 35 exec/s: 0 rss: 70Mb L: 21/27 MS: 1 CopyPart- 00:06:17.343 [2024-07-12 13:32:05.794186] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.343 [2024-07-12 13:32:05.794293] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.343 [2024-07-12 13:32:05.794477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.343 [2024-07-12 13:32:05.794504] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.343 [2024-07-12 13:32:05.794549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.343 [2024-07-12 13:32:05.794561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.343 #29 NEW cov: 12138 ft: 14452 corp: 19/231b lim: 35 exec/s: 0 rss: 70Mb L: 17/27 MS: 1 CopyPart- 00:06:17.343 [2024-07-12 13:32:05.844419] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.343 [2024-07-12 13:32:05.844523] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.343 [2024-07-12 13:32:05.844705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2600008a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.343 [2024-07-12 13:32:05.844736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.343 [2024-07-12 13:32:05.844778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.343 [2024-07-12 13:32:05.844790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.343 [2024-07-12 13:32:05.844833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.343 [2024-07-12 13:32:05.844845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.343 #30 NEW cov: 12138 ft: 14477 corp: 20/253b lim: 35 exec/s: 30 rss: 70Mb L: 22/27 MS: 1 InsertByte- 00:06:17.343 [2024-07-12 13:32:05.904471] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.343 [2024-07-12 13:32:05.904652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.343 [2024-07-12 13:32:05.904678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.604 #31 NEW cov: 12138 ft: 14486 corp: 21/264b lim: 35 exec/s: 31 rss: 70Mb L: 11/27 MS: 1 ShuffleBytes- 00:06:17.604 [2024-07-12 13:32:05.965009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7171000a cdw11:71007171 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.604 [2024-07-12 13:32:05.965035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.604 [2024-07-12 13:32:05.965078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:71710071 cdw11:71007171 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.604 [2024-07-12 13:32:05.965089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.604 [2024-07-12 13:32:05.965133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 
cid:6 nsid:0 cdw10:71710071 cdw11:71007171 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.604 [2024-07-12 13:32:05.965143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.604 #32 NEW cov: 12138 ft: 14499 corp: 22/291b lim: 35 exec/s: 32 rss: 70Mb L: 27/27 MS: 1 InsertRepeatedBytes- 00:06:17.604 [2024-07-12 13:32:06.025290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:e6e600e6 cdw11:e600e6e6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.604 [2024-07-12 13:32:06.025314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.604 [2024-07-12 13:32:06.025356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:e6e600e6 cdw11:e600e6e6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.604 [2024-07-12 13:32:06.025366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.604 [2024-07-12 13:32:06.025411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:e6e600e6 cdw11:e600e6e6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.604 [2024-07-12 13:32:06.025425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.604 [2024-07-12 13:32:06.025467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:e6e600e6 cdw11:e600e6e6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.604 [2024-07-12 13:32:06.025478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:17.604 #33 NEW cov: 12138 ft: 15015 corp: 23/322b lim: 35 exec/s: 33 rss: 70Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:06:17.604 [2024-07-12 13:32:06.074933] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.604 [2024-07-12 13:32:06.075033] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.604 [2024-07-12 13:32:06.075220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.604 [2024-07-12 13:32:06.075249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.604 [2024-07-12 13:32:06.075296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.604 [2024-07-12 13:32:06.075308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.604 #34 NEW cov: 12138 ft: 15035 corp: 24/342b lim: 35 exec/s: 34 rss: 70Mb L: 20/31 MS: 1 ChangeBit- 00:06:17.604 [2024-07-12 13:32:06.135096] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.604 [2024-07-12 13:32:06.135287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.604 [2024-07-12 13:32:06.135311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.604 #35 NEW cov: 12138 ft: 15041 corp: 25/349b lim: 35 exec/s: 35 rss: 70Mb L: 7/31 MS: 1 EraseBytes- 00:06:17.604 [2024-07-12 13:32:06.175162] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.604 [2024-07-12 13:32:06.175354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:007a0000 cdw11:00007a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.604 [2024-07-12 13:32:06.175379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.865 #36 NEW cov: 12138 ft: 15050 corp: 26/360b lim: 35 exec/s: 36 rss: 70Mb L: 11/31 MS: 1 CopyPart- 00:06:17.865 [2024-07-12 13:32:06.225332] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.865 [2024-07-12 13:32:06.225434] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.865 [2024-07-12 13:32:06.225622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.865 [2024-07-12 13:32:06.225647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.865 [2024-07-12 13:32:06.225691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.865 [2024-07-12 13:32:06.225703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.865 #42 NEW cov: 12138 ft: 15075 corp: 27/377b lim: 35 exec/s: 42 rss: 70Mb L: 17/31 MS: 1 InsertRepeatedBytes- 00:06:17.865 [2024-07-12 13:32:06.285755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7171000a cdw11:71007171 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.865 [2024-07-12 13:32:06.285781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.865 [2024-07-12 13:32:06.285829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:71710071 cdw11:71007171 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.865 [2024-07-12 13:32:06.285839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.865 #43 NEW cov: 12138 ft: 15081 corp: 28/393b lim: 35 exec/s: 43 rss: 70Mb L: 16/31 MS: 1 EraseBytes- 00:06:17.865 [2024-07-12 13:32:06.345911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7171000a cdw11:71007126 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.865 [2024-07-12 13:32:06.345935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.865 [2024-07-12 13:32:06.345980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:71710071 cdw11:71007171 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.865 [2024-07-12 13:32:06.345991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.865 #44 NEW cov: 12138 ft: 15093 corp: 29/410b lim: 35 exec/s: 
44 rss: 72Mb L: 17/31 MS: 1 InsertByte- 00:06:17.865 [2024-07-12 13:32:06.405806] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.865 [2024-07-12 13:32:06.405992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.865 [2024-07-12 13:32:06.406016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.865 #45 NEW cov: 12138 ft: 15101 corp: 30/421b lim: 35 exec/s: 45 rss: 72Mb L: 11/31 MS: 1 ChangeBinInt- 00:06:18.127 [2024-07-12 13:32:06.455968] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.127 [2024-07-12 13:32:06.456235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:7e007e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.127 [2024-07-12 13:32:06.456261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.127 [2024-07-12 13:32:06.456305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:7e7e007e cdw11:7e007e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.127 [2024-07-12 13:32:06.456316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.127 #46 NEW cov: 12138 ft: 15110 corp: 31/439b lim: 35 exec/s: 46 rss: 72Mb L: 18/31 MS: 1 InsertRepeatedBytes- 00:06:18.127 [2024-07-12 13:32:06.516099] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.127 [2024-07-12 13:32:06.516287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.127 [2024-07-12 13:32:06.516312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.127 #47 NEW cov: 12138 ft: 15124 corp: 32/449b lim: 35 exec/s: 47 rss: 72Mb L: 10/31 MS: 1 EraseBytes- 00:06:18.127 [2024-07-12 13:32:06.556363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:22a2000a cdw11:a200a2a2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.127 [2024-07-12 13:32:06.556388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.127 #48 NEW cov: 12138 ft: 15142 corp: 33/456b lim: 35 exec/s: 48 rss: 72Mb L: 7/31 MS: 1 EraseBytes- 00:06:18.127 [2024-07-12 13:32:06.596454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:22a2000a cdw11:a200a2a9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.127 [2024-07-12 13:32:06.596477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.127 #49 NEW cov: 12138 ft: 15153 corp: 34/463b lim: 35 exec/s: 49 rss: 72Mb L: 7/31 MS: 1 ChangeBinInt- 00:06:18.127 [2024-07-12 13:32:06.656640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:22a2000a cdw11:a200a2a2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.127 [2024-07-12 13:32:06.656663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.127 #50 NEW cov: 12138 ft: 15163 corp: 35/471b lim: 35 exec/s: 50 rss: 72Mb L: 8/31 MS: 1 ShuffleBytes- 00:06:18.127 [2024-07-12 13:32:06.696597] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.127 [2024-07-12 13:32:06.696784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:a2000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.127 [2024-07-12 13:32:06.696809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.388 #51 NEW cov: 12138 ft: 15179 corp: 36/484b lim: 35 exec/s: 51 rss: 72Mb L: 13/31 MS: 1 CrossOver- 00:06:18.388 [2024-07-12 13:32:06.736684] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.388 [2024-07-12 13:32:06.736871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:a2000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.388 [2024-07-12 13:32:06.736896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.388 #52 NEW cov: 12138 ft: 15194 corp: 37/497b lim: 35 exec/s: 52 rss: 72Mb L: 13/31 MS: 1 ShuffleBytes- 00:06:18.388 [2024-07-12 13:32:06.796870] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.388 [2024-07-12 13:32:06.796967] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.388 [2024-07-12 13:32:06.797156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.388 [2024-07-12 13:32:06.797180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.388 [2024-07-12 13:32:06.797226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:02000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.388 [2024-07-12 13:32:06.797243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.388 #53 NEW cov: 12138 ft: 15210 corp: 38/517b lim: 35 exec/s: 53 rss: 72Mb L: 20/31 MS: 1 ChangeBit- 00:06:18.388 [2024-07-12 13:32:06.857125] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.388 [2024-07-12 13:32:06.857574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.388 [2024-07-12 13:32:06.857600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.388 [2024-07-12 13:32:06.857644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.388 [2024-07-12 13:32:06.857654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.388 [2024-07-12 13:32:06.857699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 
cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.388 [2024-07-12 13:32:06.857709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.388 [2024-07-12 13:32:06.857752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.388 [2024-07-12 13:32:06.857766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.388 #55 NEW cov: 12138 ft: 15227 corp: 39/545b lim: 35 exec/s: 27 rss: 72Mb L: 28/31 MS: 2 EraseBytes-InsertRepeatedBytes- 00:06:18.388 #55 DONE cov: 12138 ft: 15227 corp: 39/545b lim: 35 exec/s: 27 rss: 72Mb 00:06:18.388 Done 55 runs in 2 second(s) 00:06:18.650 13:32:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:18.650 13:32:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:18.650 13:32:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:18.650 13:32:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:18.650 13:32:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:18.650 13:32:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:18.650 13:32:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:18.650 13:32:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:18.650 13:32:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:18.650 13:32:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:18.650 13:32:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:18.650 13:32:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:18.650 13:32:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:18.650 13:32:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:18.650 13:32:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:18.650 13:32:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:18.650 13:32:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:18.650 13:32:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:18.650 13:32:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:18.650 [2024-07-12 13:32:07.023032] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
00:06:18.650 [2024-07-12 13:32:07.023115] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2438764 ] 00:06:18.650 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.650 [2024-07-12 13:32:07.189391] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.910 [2024-07-12 13:32:07.249399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.910 [2024-07-12 13:32:07.311745] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:18.910 [2024-07-12 13:32:07.328074] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:18.910 INFO: Running with entropic power schedule (0xFF, 100). 00:06:18.910 INFO: Seed: 1688685401 00:06:18.910 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:18.910 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:18.910 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:18.910 INFO: A corpus is not provided, starting from an empty corpus 00:06:18.910 #2 INITED exec/s: 0 rss: 64Mb 00:06:18.910 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:18.910 This may also happen if the target rejected all inputs we tried so far 00:06:19.169 NEW_FUNC[1/683]: 0x488f00 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:19.169 NEW_FUNC[2/683]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:19.169 #3 NEW cov: 11779 ft: 11780 corp: 2/6b lim: 20 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 CMP- DE: "\377\377\377~"- 00:06:19.169 NEW_FUNC[1/1]: 0x101fb10 in spdk_sock_prep_reqs /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk_internal/sock.h:306 00:06:19.169 #7 NEW cov: 11912 ft: 12237 corp: 3/12b lim: 20 exec/s: 0 rss: 70Mb L: 6/6 MS: 4 EraseBytes-ChangeBit-CrossOver-InsertRepeatedBytes- 00:06:19.169 #8 NEW cov: 11932 ft: 12942 corp: 4/21b lim: 20 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 CMP- DE: "\000\000\000\014"- 00:06:19.429 #9 NEW cov: 12017 ft: 13187 corp: 5/30b lim: 20 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 CopyPart- 00:06:19.429 #11 NEW cov: 12034 ft: 13738 corp: 6/46b lim: 20 exec/s: 0 rss: 70Mb L: 16/16 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:19.429 #12 NEW cov: 12034 ft: 13827 corp: 7/52b lim: 20 exec/s: 0 rss: 70Mb L: 6/16 MS: 1 InsertByte- 00:06:19.429 #13 NEW cov: 12034 ft: 13998 corp: 8/69b lim: 20 exec/s: 0 rss: 70Mb L: 17/17 MS: 1 InsertByte- 00:06:19.689 #14 NEW cov: 12034 ft: 14059 corp: 9/74b lim: 20 exec/s: 0 rss: 70Mb L: 5/17 MS: 1 EraseBytes- 00:06:19.689 #15 NEW cov: 12034 ft: 14123 corp: 10/80b lim: 20 exec/s: 0 rss: 70Mb L: 6/17 MS: 1 ChangeBit- 00:06:19.689 #16 NEW cov: 12034 ft: 14171 corp: 11/98b lim: 20 exec/s: 0 rss: 70Mb L: 18/18 MS: 1 InsertByte- 00:06:19.689 [2024-07-12 13:32:08.208589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.689 [2024-07-12 13:32:08.208630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.689 NEW_FUNC[1/20]: 0x11db1b0 in 
nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3359 00:06:19.689 NEW_FUNC[2/20]: 0x11dbd30 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3301 00:06:19.689 #17 NEW cov: 12356 ft: 14575 corp: 12/114b lim: 20 exec/s: 0 rss: 72Mb L: 16/18 MS: 1 InsertRepeatedBytes- 00:06:19.949 #18 NEW cov: 12359 ft: 14635 corp: 13/123b lim: 20 exec/s: 0 rss: 72Mb L: 9/18 MS: 1 CopyPart- 00:06:19.949 #19 NEW cov: 12359 ft: 14657 corp: 14/134b lim: 20 exec/s: 19 rss: 72Mb L: 11/18 MS: 1 CrossOver- 00:06:19.949 #21 NEW cov: 12359 ft: 14670 corp: 15/144b lim: 20 exec/s: 21 rss: 72Mb L: 10/18 MS: 2 CopyPart-InsertRepeatedBytes- 00:06:19.949 #22 NEW cov: 12359 ft: 14717 corp: 16/161b lim: 20 exec/s: 22 rss: 72Mb L: 17/18 MS: 1 CrossOver- 00:06:20.209 #23 NEW cov: 12359 ft: 14724 corp: 17/170b lim: 20 exec/s: 23 rss: 72Mb L: 9/18 MS: 1 ShuffleBytes- 00:06:20.209 #24 NEW cov: 12359 ft: 14743 corp: 18/179b lim: 20 exec/s: 24 rss: 72Mb L: 9/18 MS: 1 CopyPart- 00:06:20.209 #25 NEW cov: 12359 ft: 14805 corp: 19/189b lim: 20 exec/s: 25 rss: 72Mb L: 10/18 MS: 1 ShuffleBytes- 00:06:20.209 #26 NEW cov: 12363 ft: 14972 corp: 20/204b lim: 20 exec/s: 26 rss: 72Mb L: 15/18 MS: 1 EraseBytes- 00:06:20.469 #27 NEW cov: 12363 ft: 14986 corp: 21/214b lim: 20 exec/s: 27 rss: 72Mb L: 10/18 MS: 1 InsertByte- 00:06:20.469 #28 NEW cov: 12363 ft: 15032 corp: 22/230b lim: 20 exec/s: 28 rss: 72Mb L: 16/18 MS: 1 ChangeByte- 00:06:20.469 #30 NEW cov: 12363 ft: 15063 corp: 23/235b lim: 20 exec/s: 30 rss: 72Mb L: 5/18 MS: 2 ChangeBit-PersAutoDict- DE: "\377\377\377~"- 00:06:20.469 #31 NEW cov: 12363 ft: 15133 corp: 24/254b lim: 20 exec/s: 31 rss: 72Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:06:20.729 [2024-07-12 13:32:09.061970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.730 [2024-07-12 13:32:09.062008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.730 #32 NEW cov: 12363 ft: 15157 corp: 25/271b lim: 20 exec/s: 32 rss: 72Mb L: 17/19 MS: 1 InsertByte- 00:06:20.730 #33 NEW cov: 12363 ft: 15199 corp: 26/289b lim: 20 exec/s: 33 rss: 72Mb L: 18/19 MS: 1 CopyPart- 00:06:20.730 #34 NEW cov: 12363 ft: 15220 corp: 27/305b lim: 20 exec/s: 34 rss: 72Mb L: 16/19 MS: 1 InsertRepeatedBytes- 00:06:20.730 [2024-07-12 13:32:09.272305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.730 [2024-07-12 13:32:09.272342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.991 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:20.991 #35 NEW cov: 12386 ft: 15277 corp: 28/318b lim: 20 exec/s: 35 rss: 73Mb L: 13/19 MS: 1 EraseBytes- 00:06:20.991 #36 NEW cov: 12386 ft: 15284 corp: 29/327b lim: 20 exec/s: 18 rss: 73Mb L: 9/19 MS: 1 ChangeByte- 00:06:20.991 #36 DONE cov: 12386 ft: 15284 corp: 29/327b lim: 20 exec/s: 18 rss: 73Mb 00:06:20.991 ###### Recommended dictionary. ###### 00:06:20.991 "\377\377\377~" # Uses: 1 00:06:20.991 "\000\000\000\014" # Uses: 0 00:06:20.991 ###### End of recommended dictionary. 
######
00:06:20.991 Done 36 runs in 2 second(s)
00:06:20.991 13:32:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz
00:06:20.991 13:32:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:20.991 13:32:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:20.991 13:32:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1
00:06:20.991 13:32:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4
00:06:20.991 13:32:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:20.991 13:32:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:20.991 13:32:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4
00:06:20.991 13:32:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf
00:06:20.991 13:32:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:20.991 13:32:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:20.991 13:32:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4
00:06:20.991 13:32:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404
00:06:20.991 13:32:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4
00:06:20.991 13:32:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404'
00:06:20.991 13:32:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:20.991 13:32:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:20.991 13:32:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:20.991 13:32:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4
00:06:21.252 [2024-07-12 13:32:09.515344] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization...
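The common.sh fragments interleaved above ((( i++ )), (( i < fuzz_num )), start_llvm_fuzz 4 1 0x1) imply a simple driver loop that walks the fuzzer types in order, while nvmf/run.sh@54 removes each run's /tmp config and the shared suppression file before the counter advances. A minimal sketch of that loop under stated assumptions — the starting index and the value of fuzz_num are not visible in this excerpt, so both are illustrative:

    # Hypothetical reconstruction of the ../common.sh driver loop behind this log.
    fuzz_num=${fuzz_num:-5}            # assumption: the real count is set by the caller
    for (( i = 0; i < fuzz_num; i++ )); do
        # timen=1 and core mask 0x1 match the traced arguments for every run here.
        start_llvm_fuzz "$i" 1 0x1
    done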
00:06:20.991 [2024-07-12 13:32:09.515434] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2439433 ] 00:06:20.991 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.252 [2024-07-12 13:32:09.661135] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.252 [2024-07-12 13:32:09.712970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.252 [2024-07-12 13:32:09.774459] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.252 [2024-07-12 13:32:09.790754] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:21.252 INFO: Running with entropic power schedule (0xFF, 100). 00:06:21.252 INFO: Seed: 4151687536 00:06:21.252 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:21.252 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:21.252 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:21.252 INFO: A corpus is not provided, starting from an empty corpus 00:06:21.252 #2 INITED exec/s: 0 rss: 65Mb 00:06:21.252 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:21.252 This may also happen if the target rejected all inputs we tried so far 00:06:21.512 [2024-07-12 13:32:09.857646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.512 [2024-07-12 13:32:09.857680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.512 NEW_FUNC[1/695]: 0x489ff0 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:21.512 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:21.512 #10 NEW cov: 11905 ft: 11906 corp: 2/10b lim: 35 exec/s: 0 rss: 72Mb L: 9/9 MS: 3 ChangeBit-ChangeBit-CMP- DE: "\001\000\000\000\000\000\000\000"- 00:06:21.512 [2024-07-12 13:32:10.048198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff01ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.512 [2024-07-12 13:32:10.048263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.772 NEW_FUNC[1/1]: 0x1889c00 in nvme_tcp_read_data /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h:412 00:06:21.772 #11 NEW cov: 12036 ft: 12425 corp: 3/19b lim: 35 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBinInt- 00:06:21.772 [2024-07-12 13:32:10.128798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6565e265 cdw11:65650002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.772 [2024-07-12 13:32:10.128832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.772 [2024-07-12 13:32:10.128945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:65656565 
cdw11:65650002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.772 [2024-07-12 13:32:10.128959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.772 #21 NEW cov: 12042 ft: 13420 corp: 4/38b lim: 35 exec/s: 0 rss: 72Mb L: 19/19 MS: 5 InsertByte-CopyPart-CrossOver-EraseBytes-InsertRepeatedBytes- 00:06:21.772 [2024-07-12 13:32:10.188806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.772 [2024-07-12 13:32:10.188836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.772 #22 NEW cov: 12127 ft: 13620 corp: 5/47b lim: 35 exec/s: 0 rss: 72Mb L: 9/19 MS: 1 ChangeBinInt- 00:06:21.772 [2024-07-12 13:32:10.249041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff01ff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.772 [2024-07-12 13:32:10.249068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.772 #23 NEW cov: 12127 ft: 13856 corp: 6/57b lim: 35 exec/s: 0 rss: 72Mb L: 10/19 MS: 1 CrossOver- 00:06:21.772 [2024-07-12 13:32:10.319300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff01ff cdw11:ff0a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.772 [2024-07-12 13:32:10.319328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.032 #24 NEW cov: 12127 ft: 13936 corp: 7/67b lim: 35 exec/s: 0 rss: 72Mb L: 10/19 MS: 1 ShuffleBytes- 00:06:22.032 [2024-07-12 13:32:10.389672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000201 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.032 [2024-07-12 13:32:10.389701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.032 #26 NEW cov: 12127 ft: 13964 corp: 8/76b lim: 35 exec/s: 0 rss: 72Mb L: 9/19 MS: 2 ChangeBit-PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:22.032 [2024-07-12 13:32:10.450156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:32000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.033 [2024-07-12 13:32:10.450185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.033 #27 NEW cov: 12127 ft: 13986 corp: 9/86b lim: 35 exec/s: 0 rss: 72Mb L: 10/19 MS: 1 InsertByte- 00:06:22.033 [2024-07-12 13:32:10.510519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000201 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.033 [2024-07-12 13:32:10.510546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.033 #28 NEW cov: 12127 ft: 14005 corp: 10/95b lim: 35 exec/s: 0 rss: 72Mb L: 9/19 MS: 1 ChangeBinInt- 00:06:22.033 [2024-07-12 13:32:10.580966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000bf01 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.033 [2024-07-12 13:32:10.580993] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.293 #29 NEW cov: 12127 ft: 14047 corp: 11/104b lim: 35 exec/s: 0 rss: 72Mb L: 9/19 MS: 1 ChangeByte- 00:06:22.293 [2024-07-12 13:32:10.651369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0001bf01 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.293 [2024-07-12 13:32:10.651396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.293 #30 NEW cov: 12127 ft: 14051 corp: 12/113b lim: 35 exec/s: 0 rss: 73Mb L: 9/19 MS: 1 ShuffleBytes- 00:06:22.293 [2024-07-12 13:32:10.721910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0100bf00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.293 [2024-07-12 13:32:10.721937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.293 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:22.293 #31 NEW cov: 12150 ft: 14089 corp: 13/120b lim: 35 exec/s: 0 rss: 73Mb L: 7/19 MS: 1 EraseBytes- 00:06:22.293 [2024-07-12 13:32:10.782610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6565e265 cdw11:65650002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.293 [2024-07-12 13:32:10.782641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.293 [2024-07-12 13:32:10.782753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:45650002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.293 [2024-07-12 13:32:10.782769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.293 #32 NEW cov: 12150 ft: 14137 corp: 14/139b lim: 35 exec/s: 32 rss: 73Mb L: 19/19 MS: 1 ChangeBit- 00:06:22.293 [2024-07-12 13:32:10.863097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6565e265 cdw11:65650002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.293 [2024-07-12 13:32:10.863126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.293 [2024-07-12 13:32:10.863236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65650002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.293 [2024-07-12 13:32:10.863253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.552 #33 NEW cov: 12150 ft: 14145 corp: 15/158b lim: 35 exec/s: 33 rss: 73Mb L: 19/19 MS: 1 ShuffleBytes- 00:06:22.552 [2024-07-12 13:32:10.923223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00400100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.552 [2024-07-12 13:32:10.923257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.552 #34 NEW cov: 12150 ft: 14271 corp: 16/167b lim: 35 exec/s: 34 rss: 73Mb L: 9/19 MS: 1 ChangeBit- 00:06:22.552 [2024-07-12 13:32:10.983601] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff01ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.552 [2024-07-12 13:32:10.983629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.552 #35 NEW cov: 12150 ft: 14316 corp: 17/176b lim: 35 exec/s: 35 rss: 73Mb L: 9/19 MS: 1 ShuffleBytes- 00:06:22.552 [2024-07-12 13:32:11.043954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff01bf cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.552 [2024-07-12 13:32:11.043983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.552 #36 NEW cov: 12150 ft: 14336 corp: 18/185b lim: 35 exec/s: 36 rss: 73Mb L: 9/19 MS: 1 ChangeBit- 00:06:22.552 [2024-07-12 13:32:11.104408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:09000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.552 [2024-07-12 13:32:11.104438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.552 #37 NEW cov: 12150 ft: 14357 corp: 19/194b lim: 35 exec/s: 37 rss: 73Mb L: 9/19 MS: 1 ChangeBinInt- 00:06:22.812 [2024-07-12 13:32:11.164799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:9d000201 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.812 [2024-07-12 13:32:11.164826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.812 #38 NEW cov: 12150 ft: 14365 corp: 20/204b lim: 35 exec/s: 38 rss: 73Mb L: 10/19 MS: 1 InsertByte- 00:06:22.812 [2024-07-12 13:32:11.225483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6565e265 cdw11:65650002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.812 [2024-07-12 13:32:11.225512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.812 [2024-07-12 13:32:11.225629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:650a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.812 [2024-07-12 13:32:11.225645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.812 #39 NEW cov: 12150 ft: 14378 corp: 21/219b lim: 35 exec/s: 39 rss: 73Mb L: 15/19 MS: 1 EraseBytes- 00:06:22.812 [2024-07-12 13:32:11.295527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff01ff cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.812 [2024-07-12 13:32:11.295554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.812 #40 NEW cov: 12150 ft: 14390 corp: 22/229b lim: 35 exec/s: 40 rss: 73Mb L: 10/19 MS: 1 CrossOver- 00:06:22.812 [2024-07-12 13:32:11.355713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:65656565 cdw11:65650002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.812 [2024-07-12 13:32:11.355742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.812 #41 NEW cov: 12150 ft: 14409 corp: 23/238b lim: 35 exec/s: 41 rss: 73Mb L: 9/19 MS: 1 CrossOver- 00:06:23.072 [2024-07-12 13:32:11.416055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff01ff cdw11:ff0a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.072 [2024-07-12 13:32:11.416083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.072 #42 NEW cov: 12150 ft: 14444 corp: 24/248b lim: 35 exec/s: 42 rss: 73Mb L: 10/19 MS: 1 CrossOver- 00:06:23.072 [2024-07-12 13:32:11.486507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.072 [2024-07-12 13:32:11.486539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.072 #43 NEW cov: 12150 ft: 14463 corp: 25/258b lim: 35 exec/s: 43 rss: 73Mb L: 10/19 MS: 1 CrossOver- 00:06:23.072 [2024-07-12 13:32:11.547451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff4b01ff cdw11:4b4b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.072 [2024-07-12 13:32:11.547479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.072 [2024-07-12 13:32:11.547592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:4b4b4b4b cdw11:4b4b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.072 [2024-07-12 13:32:11.547608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.072 [2024-07-12 13:32:11.547720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4b4b4b4b cdw11:4b4b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.072 [2024-07-12 13:32:11.547734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.072 #45 NEW cov: 12150 ft: 14701 corp: 26/282b lim: 35 exec/s: 45 rss: 73Mb L: 24/24 MS: 2 EraseBytes-InsertRepeatedBytes- 00:06:23.072 [2024-07-12 13:32:11.627174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fff701ff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.072 [2024-07-12 13:32:11.627203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.072 #46 NEW cov: 12150 ft: 14702 corp: 27/292b lim: 35 exec/s: 46 rss: 73Mb L: 10/24 MS: 1 ChangeBinInt- 00:06:23.333 [2024-07-12 13:32:11.687840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6565e265 cdw11:65650002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.333 [2024-07-12 13:32:11.687866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.333 [2024-07-12 13:32:11.687968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:650a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.333 [2024-07-12 13:32:11.687983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:06:23.333 #47 NEW cov: 12150 ft: 14721 corp: 28/308b lim: 35 exec/s: 47 rss: 73Mb L: 16/24 MS: 1 InsertByte- 00:06:23.333 [2024-07-12 13:32:11.757918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff01bf cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.333 [2024-07-12 13:32:11.757946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.333 #48 NEW cov: 12150 ft: 14736 corp: 29/321b lim: 35 exec/s: 48 rss: 73Mb L: 13/24 MS: 1 CopyPart- 00:06:23.333 [2024-07-12 13:32:11.828918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:65c0e265 cdw11:62450000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.333 [2024-07-12 13:32:11.828947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.333 [2024-07-12 13:32:11.829062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2700f71a cdw11:65650002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.333 [2024-07-12 13:32:11.829078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.333 [2024-07-12 13:32:11.829189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65650002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.333 [2024-07-12 13:32:11.829205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.333 #49 NEW cov: 12150 ft: 14745 corp: 30/348b lim: 35 exec/s: 24 rss: 73Mb L: 27/27 MS: 1 CMP- DE: "\300bE\017\367\032'\000"- 00:06:23.333 #49 DONE cov: 12150 ft: 14745 corp: 30/348b lim: 35 exec/s: 24 rss: 73Mb 00:06:23.333 ###### Recommended dictionary. ###### 00:06:23.333 "\001\000\000\000\000\000\000\000" # Uses: 1 00:06:23.333 "\300bE\017\367\032'\000" # Uses: 0 00:06:23.333 ###### End of recommended dictionary. 
###### 00:06:23.333 Done 49 runs in 2 second(s) 00:06:23.594 13:32:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:23.594 13:32:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:23.594 13:32:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:23.594 13:32:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:23.594 13:32:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:23.594 13:32:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:23.594 13:32:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:23.594 13:32:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:23.594 13:32:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:23.594 13:32:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:23.594 13:32:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:23.594 13:32:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:23.594 13:32:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:23.594 13:32:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:23.594 13:32:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:23.594 13:32:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:23.594 13:32:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:23.594 13:32:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:23.594 13:32:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:23.594 [2024-07-12 13:32:11.994255] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
00:06:23.594 [2024-07-12 13:32:11.994347] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2439788 ] 00:06:23.594 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.594 [2024-07-12 13:32:12.140246] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.854 [2024-07-12 13:32:12.193289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.854 [2024-07-12 13:32:12.254608] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.854 [2024-07-12 13:32:12.270910] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:23.854 INFO: Running with entropic power schedule (0xFF, 100). 00:06:23.854 INFO: Seed: 2334720100 00:06:23.854 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:23.854 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:23.854 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:23.854 INFO: A corpus is not provided, starting from an empty corpus 00:06:23.854 #2 INITED exec/s: 0 rss: 65Mb 00:06:23.854 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:23.854 This may also happen if the target rejected all inputs we tried so far 00:06:23.854 [2024-07-12 13:32:12.331425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:261a3bff cdw11:f7570001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.854 [2024-07-12 13:32:12.331461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.114 NEW_FUNC[1/694]: 0x48c180 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:06:24.114 NEW_FUNC[2/694]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:24.114 #20 NEW cov: 11891 ft: 11918 corp: 2/11b lim: 45 exec/s: 0 rss: 72Mb L: 10/10 MS: 3 CopyPart-InsertByte-CMP- DE: "\377&\032\367W9\323%"- 00:06:24.114 [2024-07-12 13:32:12.522880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.114 [2024-07-12 13:32:12.522929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.114 [2024-07-12 13:32:12.523060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.114 [2024-07-12 13:32:12.523079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.114 [2024-07-12 13:32:12.523193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.114 [2024-07-12 13:32:12.523211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.114 NEW_FUNC[1/2]: 0x1a77c30 in 
event_queue_run_batch /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:546 00:06:24.114 NEW_FUNC[2/2]: 0x1a7ce10 in _reactor_run /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:898 00:06:24.114 #23 NEW cov: 12047 ft: 13319 corp: 3/41b lim: 45 exec/s: 0 rss: 72Mb L: 30/30 MS: 3 CrossOver-CrossOver-InsertRepeatedBytes- 00:06:24.114 [2024-07-12 13:32:12.592198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:261a3bff cdw11:f7570001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.114 [2024-07-12 13:32:12.592235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.114 #24 NEW cov: 12053 ft: 13550 corp: 4/51b lim: 45 exec/s: 0 rss: 72Mb L: 10/30 MS: 1 ChangeByte- 00:06:24.114 [2024-07-12 13:32:12.663240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.114 [2024-07-12 13:32:12.663270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.114 [2024-07-12 13:32:12.663384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.114 [2024-07-12 13:32:12.663406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.114 [2024-07-12 13:32:12.663521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.114 [2024-07-12 13:32:12.663536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.374 #25 NEW cov: 12138 ft: 13844 corp: 5/81b lim: 45 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 ChangeBinInt- 00:06:24.374 [2024-07-12 13:32:12.742808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:261a3bff cdw11:f7570001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.374 [2024-07-12 13:32:12.742838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.374 #26 NEW cov: 12138 ft: 13996 corp: 6/91b lim: 45 exec/s: 0 rss: 72Mb L: 10/30 MS: 1 ChangeASCIIInt- 00:06:24.374 [2024-07-12 13:32:12.803820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:0a220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.374 [2024-07-12 13:32:12.803850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.374 [2024-07-12 13:32:12.803963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.374 [2024-07-12 13:32:12.803979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.374 [2024-07-12 13:32:12.804093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.374 [2024-07-12 13:32:12.804111] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.374 #27 NEW cov: 12138 ft: 14052 corp: 7/122b lim: 45 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 CopyPart- 00:06:24.374 [2024-07-12 13:32:12.864018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.374 [2024-07-12 13:32:12.864046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.374 [2024-07-12 13:32:12.864164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.374 [2024-07-12 13:32:12.864180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.375 [2024-07-12 13:32:12.864299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.375 [2024-07-12 13:32:12.864317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.375 #28 NEW cov: 12138 ft: 14140 corp: 8/152b lim: 45 exec/s: 0 rss: 72Mb L: 30/31 MS: 1 ShuffleBytes- 00:06:24.375 [2024-07-12 13:32:12.944708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:0a220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.375 [2024-07-12 13:32:12.944737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.375 [2024-07-12 13:32:12.944844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.375 [2024-07-12 13:32:12.944860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.375 [2024-07-12 13:32:12.944970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:1af7ff26 cdw11:57300006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.375 [2024-07-12 13:32:12.944987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.375 [2024-07-12 13:32:12.945102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.375 [2024-07-12 13:32:12.945117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.634 #29 NEW cov: 12138 ft: 14500 corp: 9/193b lim: 45 exec/s: 0 rss: 72Mb L: 41/41 MS: 1 CrossOver- 00:06:24.634 [2024-07-12 13:32:13.024101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:0a220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.634 [2024-07-12 13:32:13.024130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.634 [2024-07-12 13:32:13.024251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 
cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.634 [2024-07-12 13:32:13.024267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.634 #30 NEW cov: 12138 ft: 14738 corp: 10/216b lim: 45 exec/s: 0 rss: 73Mb L: 23/41 MS: 1 EraseBytes- 00:06:24.634 [2024-07-12 13:32:13.104084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:dcdc3bdc cdw11:dcdc0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.634 [2024-07-12 13:32:13.104112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.634 #31 NEW cov: 12138 ft: 14821 corp: 11/232b lim: 45 exec/s: 0 rss: 73Mb L: 16/41 MS: 1 InsertRepeatedBytes- 00:06:24.634 [2024-07-12 13:32:13.174289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:dcdc3bdc cdw11:dcdc0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.634 [2024-07-12 13:32:13.174317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.893 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:24.893 #32 NEW cov: 12161 ft: 14913 corp: 12/248b lim: 45 exec/s: 0 rss: 73Mb L: 16/41 MS: 1 ChangeBinInt- 00:06:24.893 [2024-07-12 13:32:13.245656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:0a220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.893 [2024-07-12 13:32:13.245683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.893 [2024-07-12 13:32:13.245794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.893 [2024-07-12 13:32:13.245810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.893 [2024-07-12 13:32:13.245924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.893 [2024-07-12 13:32:13.245939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.893 [2024-07-12 13:32:13.246054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.893 [2024-07-12 13:32:13.246071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.893 #33 NEW cov: 12161 ft: 14932 corp: 13/291b lim: 45 exec/s: 0 rss: 73Mb L: 43/43 MS: 1 CopyPart- 00:06:24.893 [2024-07-12 13:32:13.305840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:0a220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.893 [2024-07-12 13:32:13.305871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.893 [2024-07-12 13:32:13.305993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 
cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.893 [2024-07-12 13:32:13.306011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.893 [2024-07-12 13:32:13.306128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22200001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.893 [2024-07-12 13:32:13.306144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.893 [2024-07-12 13:32:13.306260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.893 [2024-07-12 13:32:13.306275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.893 #34 NEW cov: 12161 ft: 14957 corp: 14/334b lim: 45 exec/s: 34 rss: 73Mb L: 43/43 MS: 1 ChangeBit- 00:06:24.893 [2024-07-12 13:32:13.386520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:0a220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.893 [2024-07-12 13:32:13.386551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.893 [2024-07-12 13:32:13.386670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.893 [2024-07-12 13:32:13.386685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.893 [2024-07-12 13:32:13.386797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.893 [2024-07-12 13:32:13.386815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.893 [2024-07-12 13:32:13.386927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:22222222 cdw11:22010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.893 [2024-07-12 13:32:13.386944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.893 [2024-07-12 13:32:13.387051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.893 [2024-07-12 13:32:13.387068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:24.893 #35 NEW cov: 12161 ft: 15041 corp: 15/379b lim: 45 exec/s: 35 rss: 73Mb L: 45/45 MS: 1 CMP- DE: "\001\000"- 00:06:24.893 [2024-07-12 13:32:13.446005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.893 [2024-07-12 13:32:13.446033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.893 [2024-07-12 13:32:13.446151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 
cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.893 [2024-07-12 13:32:13.446167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.894 [2024-07-12 13:32:13.446283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.894 [2024-07-12 13:32:13.446304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.153 #36 NEW cov: 12161 ft: 15109 corp: 16/409b lim: 45 exec/s: 36 rss: 73Mb L: 30/45 MS: 1 ChangeByte- 00:06:25.153 [2024-07-12 13:32:13.506223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.153 [2024-07-12 13:32:13.506254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.153 [2024-07-12 13:32:13.506373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.153 [2024-07-12 13:32:13.506392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.153 [2024-07-12 13:32:13.506508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.153 [2024-07-12 13:32:13.506526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.153 #37 NEW cov: 12161 ft: 15128 corp: 17/441b lim: 45 exec/s: 37 rss: 73Mb L: 32/45 MS: 1 CopyPart- 00:06:25.153 [2024-07-12 13:32:13.566100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:0a220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.153 [2024-07-12 13:32:13.566128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.153 [2024-07-12 13:32:13.566244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.153 [2024-07-12 13:32:13.566258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.153 #38 NEW cov: 12161 ft: 15172 corp: 18/464b lim: 45 exec/s: 38 rss: 73Mb L: 23/45 MS: 1 ChangeBit- 00:06:25.153 [2024-07-12 13:32:13.646742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.153 [2024-07-12 13:32:13.646770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.153 [2024-07-12 13:32:13.646884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.153 [2024-07-12 13:32:13.646899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.153 [2024-07-12 
13:32:13.647013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.153 [2024-07-12 13:32:13.647030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.153 #39 NEW cov: 12161 ft: 15196 corp: 19/494b lim: 45 exec/s: 39 rss: 73Mb L: 30/45 MS: 1 ShuffleBytes- 00:06:25.153 [2024-07-12 13:32:13.706954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:0a220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.153 [2024-07-12 13:32:13.706983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.153 [2024-07-12 13:32:13.707099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00220000 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.153 [2024-07-12 13:32:13.707115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.153 [2024-07-12 13:32:13.707225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.153 [2024-07-12 13:32:13.707248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.414 #40 NEW cov: 12161 ft: 15225 corp: 20/525b lim: 45 exec/s: 40 rss: 73Mb L: 31/45 MS: 1 ChangeBinInt- 00:06:25.414 [2024-07-12 13:32:13.767291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:222c0a0a cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.414 [2024-07-12 13:32:13.767320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.414 [2024-07-12 13:32:13.767435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.414 [2024-07-12 13:32:13.767451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.414 [2024-07-12 13:32:13.767568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.414 [2024-07-12 13:32:13.767584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.414 #41 NEW cov: 12161 ft: 15256 corp: 21/558b lim: 45 exec/s: 41 rss: 73Mb L: 33/45 MS: 1 InsertByte- 00:06:25.414 [2024-07-12 13:32:13.848303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:0a220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.414 [2024-07-12 13:32:13.848331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.414 [2024-07-12 13:32:13.848447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.414 [2024-07-12 13:32:13.848462] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.414 [2024-07-12 13:32:13.848575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:22f72222 cdw11:57300006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.414 [2024-07-12 13:32:13.848591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.414 [2024-07-12 13:32:13.848716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:22222222 cdw11:22010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.414 [2024-07-12 13:32:13.848732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.414 [2024-07-12 13:32:13.848849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.414 [2024-07-12 13:32:13.848865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:25.414 #42 NEW cov: 12161 ft: 15276 corp: 22/603b lim: 45 exec/s: 42 rss: 73Mb L: 45/45 MS: 1 CrossOver- 00:06:25.414 [2024-07-12 13:32:13.927839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:222c0a0a cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.414 [2024-07-12 13:32:13.927867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.414 [2024-07-12 13:32:13.927986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.414 [2024-07-12 13:32:13.928002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.414 [2024-07-12 13:32:13.928109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.414 [2024-07-12 13:32:13.928128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.414 #43 NEW cov: 12161 ft: 15298 corp: 23/636b lim: 45 exec/s: 43 rss: 73Mb L: 33/45 MS: 1 ChangeBinInt- 00:06:25.674 [2024-07-12 13:32:14.008170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:0a220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.674 [2024-07-12 13:32:14.008199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.674 [2024-07-12 13:32:14.008324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00220000 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.674 [2024-07-12 13:32:14.008341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.674 [2024-07-12 13:32:14.008455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.674 [2024-07-12 13:32:14.008469] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.674 #44 NEW cov: 12161 ft: 15301 corp: 24/665b lim: 45 exec/s: 44 rss: 73Mb L: 29/45 MS: 1 EraseBytes- 00:06:25.674 [2024-07-12 13:32:14.088129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:0a220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.674 [2024-07-12 13:32:14.088159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.674 [2024-07-12 13:32:14.088277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.674 [2024-07-12 13:32:14.088293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.674 [2024-07-12 13:32:14.158385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.674 [2024-07-12 13:32:14.158414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.674 [2024-07-12 13:32:14.158538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:2222220a cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.674 [2024-07-12 13:32:14.158551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.674 #46 NEW cov: 12161 ft: 15309 corp: 25/688b lim: 45 exec/s: 46 rss: 73Mb L: 23/45 MS: 2 ChangeBinInt-ShuffleBytes- 00:06:25.674 [2024-07-12 13:32:14.218234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:dcdc3bdc cdw11:dcdc0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.674 [2024-07-12 13:32:14.218262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.674 #47 NEW cov: 12161 ft: 15317 corp: 26/704b lim: 45 exec/s: 47 rss: 73Mb L: 16/45 MS: 1 ShuffleBytes- 00:06:25.935 [2024-07-12 13:32:14.279294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:22220a0a cdw11:0a220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.935 [2024-07-12 13:32:14.279323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.935 [2024-07-12 13:32:14.279434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.935 [2024-07-12 13:32:14.279449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.935 [2024-07-12 13:32:14.279574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.935 [2024-07-12 13:32:14.279592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.935 #48 NEW cov: 12161 ft: 15359 corp: 27/735b lim: 45 exec/s: 24 rss: 73Mb L: 31/45 MS: 1 CrossOver- 00:06:25.935 #48 DONE 
cov: 12161 ft: 15359 corp: 27/735b lim: 45 exec/s: 24 rss: 73Mb 00:06:25.935 ###### Recommended dictionary. ###### 00:06:25.935 "\377&\032\367W9\323%" # Uses: 0 00:06:25.935 "\001\000" # Uses: 0 00:06:25.935 ###### End of recommended dictionary. ###### 00:06:25.935 Done 48 runs in 2 second(s) 00:06:25.935 13:32:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:06:25.935 13:32:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:25.935 13:32:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:25.935 13:32:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:06:25.935 13:32:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:06:25.935 13:32:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:25.935 13:32:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:25.935 13:32:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:25.935 13:32:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:06:25.935 13:32:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:25.935 13:32:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:25.935 13:32:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:06:25.935 13:32:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:06:25.935 13:32:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:25.935 13:32:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:06:25.935 13:32:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:25.935 13:32:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:25.935 13:32:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:25.935 13:32:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:06:25.935 [2024-07-12 13:32:14.443256] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
00:06:25.935 [2024-07-12 13:32:14.443348] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2440461 ] 00:06:25.935 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.196 [2024-07-12 13:32:14.614848] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.196 [2024-07-12 13:32:14.671136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.196 [2024-07-12 13:32:14.732771] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.196 [2024-07-12 13:32:14.749087] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:06:26.196 INFO: Running with entropic power schedule (0xFF, 100). 00:06:26.196 INFO: Seed: 519754028 00:06:26.456 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:26.456 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:26.456 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:26.456 INFO: A corpus is not provided, starting from an empty corpus 00:06:26.456 #2 INITED exec/s: 0 rss: 61Mb 00:06:26.456 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:26.456 This may also happen if the target rejected all inputs we tried so far 00:06:26.456 [2024-07-12 13:32:14.804097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:26.456 [2024-07-12 13:32:14.804127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.456 NEW_FUNC[1/694]: 0x48e990 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:06:26.456 NEW_FUNC[2/694]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:26.456 #3 NEW cov: 11834 ft: 11835 corp: 2/3b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 CopyPart- 00:06:26.456 [2024-07-12 13:32:14.984805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:26.456 [2024-07-12 13:32:14.984859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.456 #4 NEW cov: 11964 ft: 12433 corp: 3/5b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 CrossOver- 00:06:26.456 [2024-07-12 13:32:15.034642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a0a cdw11:00000000 00:06:26.456 [2024-07-12 13:32:15.034666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.715 #5 NEW cov: 11970 ft: 12634 corp: 4/7b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 ChangeBit- 00:06:26.715 [2024-07-12 13:32:15.094769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e8de cdw11:00000000 00:06:26.715 [2024-07-12 13:32:15.094796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.715 #9 NEW cov: 12055 ft: 12851 
corp: 5/9b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 4 EraseBytes-ChangeBit-ChangeByte-InsertByte- 00:06:26.715 [2024-07-12 13:32:15.144917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:26.715 [2024-07-12 13:32:15.144941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.715 #11 NEW cov: 12055 ft: 12890 corp: 6/11b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 2 CopyPart-CopyPart- 00:06:26.716 [2024-07-12 13:32:15.185016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a0a cdw11:00000000 00:06:26.716 [2024-07-12 13:32:15.185041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.716 #12 NEW cov: 12055 ft: 12983 corp: 7/13b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 ChangeBit- 00:06:26.716 [2024-07-12 13:32:15.245200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a0a cdw11:00000000 00:06:26.716 [2024-07-12 13:32:15.245224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.716 #13 NEW cov: 12055 ft: 13038 corp: 8/15b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 ChangeBit- 00:06:26.975 [2024-07-12 13:32:15.305356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007ade cdw11:00000000 00:06:26.975 [2024-07-12 13:32:15.305380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.975 #14 NEW cov: 12055 ft: 13129 corp: 9/17b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 ChangeByte- 00:06:26.975 [2024-07-12 13:32:15.365858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007ade cdw11:00000000 00:06:26.975 [2024-07-12 13:32:15.365886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.975 [2024-07-12 13:32:15.365927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000620c cdw11:00000000 00:06:26.975 [2024-07-12 13:32:15.365937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.975 [2024-07-12 13:32:15.365980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00006a09 cdw11:00000000 00:06:26.975 [2024-07-12 13:32:15.365991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.975 [2024-07-12 13:32:15.366031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000f91a cdw11:00000000 00:06:26.975 [2024-07-12 13:32:15.366041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.975 [2024-07-12 13:32:15.366082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00002700 cdw11:00000000 00:06:26.975 [2024-07-12 13:32:15.366092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:26.976 #15 NEW cov: 
12055 ft: 13509 corp: 10/27b lim: 10 exec/s: 0 rss: 68Mb L: 10/10 MS: 1 CMP- DE: "b\014j\011\371\032'\000"- 00:06:26.976 [2024-07-12 13:32:15.425641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000202 cdw11:00000000 00:06:26.976 [2024-07-12 13:32:15.425666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.976 #17 NEW cov: 12055 ft: 13546 corp: 11/29b lim: 10 exec/s: 0 rss: 68Mb L: 2/10 MS: 2 ChangeBit-CopyPart- 00:06:26.976 [2024-07-12 13:32:15.465829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a7a cdw11:00000000 00:06:26.976 [2024-07-12 13:32:15.465853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.976 [2024-07-12 13:32:15.465892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000ade cdw11:00000000 00:06:26.976 [2024-07-12 13:32:15.465902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.976 #18 NEW cov: 12055 ft: 13719 corp: 12/33b lim: 10 exec/s: 0 rss: 69Mb L: 4/10 MS: 1 CrossOver- 00:06:26.976 [2024-07-12 13:32:15.525914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002be8 cdw11:00000000 00:06:26.976 [2024-07-12 13:32:15.525938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.976 #19 NEW cov: 12055 ft: 13734 corp: 13/36b lim: 10 exec/s: 0 rss: 69Mb L: 3/10 MS: 1 InsertByte- 00:06:27.235 [2024-07-12 13:32:15.566038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a7a cdw11:00000000 00:06:27.235 [2024-07-12 13:32:15.566062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.235 #20 NEW cov: 12055 ft: 13742 corp: 14/39b lim: 10 exec/s: 0 rss: 69Mb L: 3/10 MS: 1 CrossOver- 00:06:27.235 [2024-07-12 13:32:15.606137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008a7a cdw11:00000000 00:06:27.235 [2024-07-12 13:32:15.606161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.235 #21 NEW cov: 12055 ft: 13772 corp: 15/42b lim: 10 exec/s: 0 rss: 69Mb L: 3/10 MS: 1 ChangeBit- 00:06:27.235 [2024-07-12 13:32:15.666266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a2a cdw11:00000000 00:06:27.235 [2024-07-12 13:32:15.666295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.235 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:27.235 #22 NEW cov: 12078 ft: 13822 corp: 16/44b lim: 10 exec/s: 0 rss: 69Mb L: 2/10 MS: 1 CopyPart- 00:06:27.235 [2024-07-12 13:32:15.726437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a8a cdw11:00000000 00:06:27.235 [2024-07-12 13:32:15.726464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:06:27.235 #23 NEW cov: 12078 ft: 13831 corp: 17/46b lim: 10 exec/s: 0 rss: 69Mb L: 2/10 MS: 1 ChangeBit- 00:06:27.236 [2024-07-12 13:32:15.766646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a8a cdw11:00000000 00:06:27.236 [2024-07-12 13:32:15.766671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.236 [2024-07-12 13:32:15.766711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007a0a cdw11:00000000 00:06:27.236 [2024-07-12 13:32:15.766722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.236 #25 NEW cov: 12078 ft: 13859 corp: 18/50b lim: 10 exec/s: 25 rss: 69Mb L: 4/10 MS: 2 CopyPart-CrossOver- 00:06:27.236 [2024-07-12 13:32:15.806665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f677 cdw11:00000000 00:06:27.236 [2024-07-12 13:32:15.806691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.496 #26 NEW cov: 12078 ft: 13932 corp: 19/52b lim: 10 exec/s: 26 rss: 69Mb L: 2/10 MS: 1 ChangeBinInt- 00:06:27.496 [2024-07-12 13:32:15.866802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a0a cdw11:00000000 00:06:27.496 [2024-07-12 13:32:15.866828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.496 #27 NEW cov: 12078 ft: 14001 corp: 20/55b lim: 10 exec/s: 27 rss: 69Mb L: 3/10 MS: 1 CopyPart- 00:06:27.496 [2024-07-12 13:32:15.907211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a62 cdw11:00000000 00:06:27.496 [2024-07-12 13:32:15.907240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.496 [2024-07-12 13:32:15.907279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000c6a cdw11:00000000 00:06:27.496 [2024-07-12 13:32:15.907289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.496 [2024-07-12 13:32:15.907330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000009f9 cdw11:00000000 00:06:27.496 [2024-07-12 13:32:15.907341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.496 [2024-07-12 13:32:15.907381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00001a27 cdw11:00000000 00:06:27.496 [2024-07-12 13:32:15.907390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.496 #29 NEW cov: 12078 ft: 14056 corp: 21/64b lim: 10 exec/s: 29 rss: 69Mb L: 9/10 MS: 2 EraseBytes-PersAutoDict- DE: "b\014j\011\371\032'\000"- 00:06:27.496 [2024-07-12 13:32:15.957151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000ae3 cdw11:00000000 00:06:27.496 [2024-07-12 13:32:15.957175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.496 [2024-07-12 13:32:15.957217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e3e3 cdw11:00000000 00:06:27.496 [2024-07-12 13:32:15.957236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.496 #30 NEW cov: 12078 ft: 14069 corp: 22/68b lim: 10 exec/s: 30 rss: 69Mb L: 4/10 MS: 1 InsertRepeatedBytes- 00:06:27.496 [2024-07-12 13:32:15.997260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002b2a cdw11:00000000 00:06:27.496 [2024-07-12 13:32:15.997284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.496 [2024-07-12 13:32:15.997326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e8de cdw11:00000000 00:06:27.496 [2024-07-12 13:32:15.997337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.496 #31 NEW cov: 12078 ft: 14087 corp: 23/72b lim: 10 exec/s: 31 rss: 69Mb L: 4/10 MS: 1 CrossOver- 00:06:27.496 [2024-07-12 13:32:16.057334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a8a cdw11:00000000 00:06:27.496 [2024-07-12 13:32:16.057358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.756 #32 NEW cov: 12078 ft: 14110 corp: 24/74b lim: 10 exec/s: 32 rss: 69Mb L: 2/10 MS: 1 ChangeBit- 00:06:27.756 [2024-07-12 13:32:16.097539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:27.756 [2024-07-12 13:32:16.097564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.756 [2024-07-12 13:32:16.097605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00008a7a cdw11:00000000 00:06:27.756 [2024-07-12 13:32:16.097615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.756 #33 NEW cov: 12078 ft: 14153 corp: 25/79b lim: 10 exec/s: 33 rss: 69Mb L: 5/10 MS: 1 CrossOver- 00:06:27.756 [2024-07-12 13:32:16.157590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a6d cdw11:00000000 00:06:27.756 [2024-07-12 13:32:16.157614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.756 #34 NEW cov: 12078 ft: 14164 corp: 26/82b lim: 10 exec/s: 34 rss: 69Mb L: 3/10 MS: 1 InsertByte- 00:06:27.756 [2024-07-12 13:32:16.197787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:27.756 [2024-07-12 13:32:16.197811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.756 [2024-07-12 13:32:16.197854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00008a7a cdw11:00000000 00:06:27.756 [2024-07-12 13:32:16.197864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.756 #35 NEW cov: 12078 ft: 14180 corp: 27/87b lim: 10 exec/s: 35 rss: 69Mb L: 5/10 MS: 1 CopyPart- 00:06:27.756 [2024-07-12 13:32:16.257948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000d5e3 cdw11:00000000 00:06:27.756 [2024-07-12 13:32:16.257972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.756 [2024-07-12 13:32:16.258013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e3e3 cdw11:00000000 00:06:27.756 [2024-07-12 13:32:16.258023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.756 #36 NEW cov: 12078 ft: 14187 corp: 28/91b lim: 10 exec/s: 36 rss: 69Mb L: 4/10 MS: 1 ChangeByte- 00:06:27.756 [2024-07-12 13:32:16.318325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000ae3 cdw11:00000000 00:06:27.756 [2024-07-12 13:32:16.318352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.756 [2024-07-12 13:32:16.318393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e389 cdw11:00000000 00:06:27.756 [2024-07-12 13:32:16.318403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.756 [2024-07-12 13:32:16.318444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00008989 cdw11:00000000 00:06:27.756 [2024-07-12 13:32:16.318454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.756 [2024-07-12 13:32:16.318496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00008989 cdw11:00000000 00:06:27.756 [2024-07-12 13:32:16.318506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.033 #37 NEW cov: 12078 ft: 14201 corp: 29/100b lim: 10 exec/s: 37 rss: 69Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:06:28.033 [2024-07-12 13:32:16.368127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000300 cdw11:00000000 00:06:28.033 [2024-07-12 13:32:16.368150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.033 #38 NEW cov: 12078 ft: 14214 corp: 30/103b lim: 10 exec/s: 38 rss: 69Mb L: 3/10 MS: 1 CMP- DE: "\003\000"- 00:06:28.033 [2024-07-12 13:32:16.408369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:28.033 [2024-07-12 13:32:16.408393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.033 [2024-07-12 13:32:16.408436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007a0a cdw11:00000000 00:06:28.033 [2024-07-12 13:32:16.408446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.033 #39 NEW cov: 12078 ft: 14222 corp: 
31/107b lim: 10 exec/s: 39 rss: 69Mb L: 4/10 MS: 1 EraseBytes- 00:06:28.033 [2024-07-12 13:32:16.448346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:28.033 [2024-07-12 13:32:16.448369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.033 #40 NEW cov: 12078 ft: 14231 corp: 32/110b lim: 10 exec/s: 40 rss: 69Mb L: 3/10 MS: 1 CopyPart- 00:06:28.033 [2024-07-12 13:32:16.488568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:28.033 [2024-07-12 13:32:16.488591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.034 [2024-07-12 13:32:16.488632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00008a7a cdw11:00000000 00:06:28.034 [2024-07-12 13:32:16.488643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.034 #41 NEW cov: 12078 ft: 14238 corp: 33/115b lim: 10 exec/s: 41 rss: 69Mb L: 5/10 MS: 1 ShuffleBytes- 00:06:28.034 [2024-07-12 13:32:16.548738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a7a cdw11:00000000 00:06:28.034 [2024-07-12 13:32:16.548762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.034 [2024-07-12 13:32:16.548805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000280a cdw11:00000000 00:06:28.034 [2024-07-12 13:32:16.548816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.034 #42 NEW cov: 12078 ft: 14239 corp: 34/119b lim: 10 exec/s: 42 rss: 69Mb L: 4/10 MS: 1 InsertByte- 00:06:28.034 [2024-07-12 13:32:16.588761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000300 cdw11:00000000 00:06:28.034 [2024-07-12 13:32:16.588784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.293 #43 NEW cov: 12078 ft: 14244 corp: 35/121b lim: 10 exec/s: 43 rss: 69Mb L: 2/10 MS: 1 PersAutoDict- DE: "\003\000"- 00:06:28.293 [2024-07-12 13:32:16.648993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:28.293 [2024-07-12 13:32:16.649017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.293 [2024-07-12 13:32:16.649057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007a0a cdw11:00000000 00:06:28.293 [2024-07-12 13:32:16.649067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.293 #44 NEW cov: 12078 ft: 14277 corp: 36/125b lim: 10 exec/s: 44 rss: 69Mb L: 4/10 MS: 1 ChangeByte- 00:06:28.293 [2024-07-12 13:32:16.709064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008a0a cdw11:00000000 00:06:28.293 [2024-07-12 13:32:16.709089] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.293 #45 NEW cov: 12078 ft: 14282 corp: 37/127b lim: 10 exec/s: 45 rss: 69Mb L: 2/10 MS: 1 ChangeBit- 00:06:28.293 [2024-07-12 13:32:16.749283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000008e3 cdw11:00000000 00:06:28.293 [2024-07-12 13:32:16.749307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.293 [2024-07-12 13:32:16.749348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e3e3 cdw11:00000000 00:06:28.293 [2024-07-12 13:32:16.749358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.293 #46 NEW cov: 12078 ft: 14306 corp: 38/131b lim: 10 exec/s: 46 rss: 69Mb L: 4/10 MS: 1 ChangeBit- 00:06:28.293 [2024-07-12 13:32:16.789386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a03 cdw11:00000000 00:06:28.293 [2024-07-12 13:32:16.789410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.293 [2024-07-12 13:32:16.789450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000002a cdw11:00000000 00:06:28.293 [2024-07-12 13:32:16.789460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.293 #47 NEW cov: 12078 ft: 14320 corp: 39/135b lim: 10 exec/s: 23 rss: 70Mb L: 4/10 MS: 1 PersAutoDict- DE: "\003\000"- 00:06:28.293 #47 DONE cov: 12078 ft: 14320 corp: 39/135b lim: 10 exec/s: 23 rss: 70Mb 00:06:28.293 ###### Recommended dictionary. ###### 00:06:28.293 "b\014j\011\371\032'\000" # Uses: 1 00:06:28.293 "\003\000" # Uses: 2 00:06:28.293 ###### End of recommended dictionary. 
###### 00:06:28.293 Done 47 runs in 2 second(s) 00:06:28.553 13:32:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:06:28.553 13:32:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:28.553 13:32:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:28.553 13:32:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:06:28.553 13:32:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:06:28.553 13:32:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:28.553 13:32:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:28.553 13:32:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:28.553 13:32:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:06:28.553 13:32:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:28.553 13:32:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:28.553 13:32:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:06:28.553 13:32:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:06:28.553 13:32:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:28.553 13:32:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:06:28.553 13:32:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:28.553 13:32:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:28.553 13:32:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:28.553 13:32:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:06:28.553 [2024-07-12 13:32:16.966454] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
00:06:28.553 [2024-07-12 13:32:16.966527] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2440816 ] 00:06:28.553 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.813 [2024-07-12 13:32:17.139080] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.813 [2024-07-12 13:32:17.191273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.813 [2024-07-12 13:32:17.252669] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.813 [2024-07-12 13:32:17.269014] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:28.813 INFO: Running with entropic power schedule (0xFF, 100). 00:06:28.813 INFO: Seed: 3038753526 00:06:28.813 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:28.813 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:28.813 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:28.813 INFO: A corpus is not provided, starting from an empty corpus 00:06:28.813 #2 INITED exec/s: 0 rss: 64Mb 00:06:28.813 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:28.813 This may also happen if the target rejected all inputs we tried so far 00:06:28.813 [2024-07-12 13:32:17.335870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004040 cdw11:00000000 00:06:28.813 [2024-07-12 13:32:17.335907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.073 NEW_FUNC[1/693]: 0x48f380 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:29.073 NEW_FUNC[2/693]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:29.073 #4 NEW cov: 11833 ft: 11830 corp: 2/3b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 2 ChangeByte-CopyPart- 00:06:29.073 [2024-07-12 13:32:17.526476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000040a5 cdw11:00000000 00:06:29.073 [2024-07-12 13:32:17.526524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.073 NEW_FUNC[1/1]: 0xf46d40 in spdk_process_is_primary /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:291 00:06:29.073 #6 NEW cov: 11964 ft: 12293 corp: 3/5b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 2 EraseBytes-InsertByte- 00:06:29.073 [2024-07-12 13:32:17.606601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004085 cdw11:00000000 00:06:29.073 [2024-07-12 13:32:17.606632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.073 #7 NEW cov: 11970 ft: 12544 corp: 4/7b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 ChangeBit- 00:06:29.333 [2024-07-12 13:32:17.676739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000282b cdw11:00000000 00:06:29.333 [2024-07-12 13:32:17.676768] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.333 #12 NEW cov: 12055 ft: 12800 corp: 5/9b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 5 CrossOver-ChangeBinInt-ShuffleBytes-ChangeByte-InsertByte- 00:06:29.333 [2024-07-12 13:32:17.736963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000cff cdw11:00000000 00:06:29.333 [2024-07-12 13:32:17.736992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.333 #14 NEW cov: 12055 ft: 13076 corp: 6/11b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 2 ChangeByte-InsertByte- 00:06:29.333 [2024-07-12 13:32:17.797109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000040a5 cdw11:00000000 00:06:29.333 [2024-07-12 13:32:17.797137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.333 #15 NEW cov: 12055 ft: 13132 corp: 7/13b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CrossOver- 00:06:29.333 [2024-07-12 13:32:17.867352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c640 cdw11:00000000 00:06:29.333 [2024-07-12 13:32:17.867379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.333 #16 NEW cov: 12055 ft: 13168 corp: 8/16b lim: 10 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 InsertByte- 00:06:29.593 [2024-07-12 13:32:17.937871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004040 cdw11:00000000 00:06:29.593 [2024-07-12 13:32:17.937898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.593 [2024-07-12 13:32:17.937996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008585 cdw11:00000000 00:06:29.593 [2024-07-12 13:32:17.938012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.593 #17 NEW cov: 12055 ft: 13381 corp: 9/20b lim: 10 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 CopyPart- 00:06:29.593 [2024-07-12 13:32:17.997952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000408d cdw11:00000000 00:06:29.593 [2024-07-12 13:32:17.997980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.593 #18 NEW cov: 12055 ft: 13459 corp: 10/22b lim: 10 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 ChangeBit- 00:06:29.593 [2024-07-12 13:32:18.058427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000030ff cdw11:00000000 00:06:29.593 [2024-07-12 13:32:18.058454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.593 #19 NEW cov: 12055 ft: 13512 corp: 11/24b lim: 10 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 ChangeByte- 00:06:29.593 [2024-07-12 13:32:18.128676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000031a5 cdw11:00000000 00:06:29.593 [2024-07-12 13:32:18.128705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.593 #20 NEW cov: 12055 ft: 13522 corp: 12/26b lim: 10 exec/s: 0 rss: 72Mb L: 2/4 MS: 1 ChangeByte- 00:06:29.853 [2024-07-12 13:32:18.188870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003240 cdw11:00000000 00:06:29.853 [2024-07-12 13:32:18.188897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.853 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:29.853 #24 NEW cov: 12078 ft: 13553 corp: 13/29b lim: 10 exec/s: 0 rss: 72Mb L: 3/4 MS: 4 EraseBytes-ChangeASCIIInt-ShuffleBytes-CrossOver- 00:06:29.853 [2024-07-12 13:32:18.259020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000cff cdw11:00000000 00:06:29.853 [2024-07-12 13:32:18.259050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.853 #25 NEW cov: 12078 ft: 13562 corp: 14/31b lim: 10 exec/s: 0 rss: 72Mb L: 2/4 MS: 1 ShuffleBytes- 00:06:29.853 [2024-07-12 13:32:18.319246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000040a5 cdw11:00000000 00:06:29.853 [2024-07-12 13:32:18.319276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.853 #26 NEW cov: 12078 ft: 13586 corp: 15/33b lim: 10 exec/s: 26 rss: 72Mb L: 2/4 MS: 1 ShuffleBytes- 00:06:29.853 [2024-07-12 13:32:18.379433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004010 cdw11:00000000 00:06:29.853 [2024-07-12 13:32:18.379462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.853 #27 NEW cov: 12078 ft: 13605 corp: 16/35b lim: 10 exec/s: 27 rss: 72Mb L: 2/4 MS: 1 ChangeByte- 00:06:30.114 [2024-07-12 13:32:18.449690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004010 cdw11:00000000 00:06:30.114 [2024-07-12 13:32:18.449718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.114 #28 NEW cov: 12078 ft: 13619 corp: 17/37b lim: 10 exec/s: 28 rss: 72Mb L: 2/4 MS: 1 ShuffleBytes- 00:06:30.114 [2024-07-12 13:32:18.519872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000408a cdw11:00000000 00:06:30.114 [2024-07-12 13:32:18.519899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.114 #29 NEW cov: 12078 ft: 13631 corp: 18/39b lim: 10 exec/s: 29 rss: 72Mb L: 2/4 MS: 1 ChangeBinInt- 00:06:30.114 [2024-07-12 13:32:18.580051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000030ff cdw11:00000000 00:06:30.114 [2024-07-12 13:32:18.580077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.114 #30 NEW cov: 12078 ft: 13645 corp: 19/41b lim: 10 exec/s: 30 rss: 72Mb L: 2/4 MS: 1 ShuffleBytes- 00:06:30.114 [2024-07-12 13:32:18.650572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 
cdw10:00004040 cdw11:00000000 00:06:30.114 [2024-07-12 13:32:18.650599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.114 [2024-07-12 13:32:18.650702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004040 cdw11:00000000 00:06:30.114 [2024-07-12 13:32:18.650718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.114 #31 NEW cov: 12078 ft: 13657 corp: 20/45b lim: 10 exec/s: 31 rss: 72Mb L: 4/4 MS: 1 CrossOver- 00:06:30.375 [2024-07-12 13:32:18.720462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004085 cdw11:00000000 00:06:30.375 [2024-07-12 13:32:18.720491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.375 #32 NEW cov: 12078 ft: 13669 corp: 21/48b lim: 10 exec/s: 32 rss: 72Mb L: 3/4 MS: 1 EraseBytes- 00:06:30.375 [2024-07-12 13:32:18.780888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000323d cdw11:00000000 00:06:30.375 [2024-07-12 13:32:18.780916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.375 [2024-07-12 13:32:18.781018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000040a5 cdw11:00000000 00:06:30.375 [2024-07-12 13:32:18.781035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.375 #33 NEW cov: 12078 ft: 13684 corp: 22/52b lim: 10 exec/s: 33 rss: 72Mb L: 4/4 MS: 1 InsertByte- 00:06:30.375 [2024-07-12 13:32:18.850900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003100 cdw11:00000000 00:06:30.375 [2024-07-12 13:32:18.850925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.375 #34 NEW cov: 12078 ft: 13697 corp: 23/54b lim: 10 exec/s: 34 rss: 72Mb L: 2/4 MS: 1 ChangeByte- 00:06:30.375 [2024-07-12 13:32:18.911601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:30.375 [2024-07-12 13:32:18.911627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.375 [2024-07-12 13:32:18.911734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:30.375 [2024-07-12 13:32:18.911749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.375 [2024-07-12 13:32:18.911854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff30 cdw11:00000000 00:06:30.375 [2024-07-12 13:32:18.911868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.375 #35 NEW cov: 12078 ft: 13954 corp: 24/61b lim: 10 exec/s: 35 rss: 72Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:06:30.636 [2024-07-12 13:32:18.971576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 
cdw10:00000cff cdw11:00000000 00:06:30.636 [2024-07-12 13:32:18.971602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.636 [2024-07-12 13:32:18.971713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c640 cdw11:00000000 00:06:30.636 [2024-07-12 13:32:18.971728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.636 #36 NEW cov: 12078 ft: 13972 corp: 25/66b lim: 10 exec/s: 36 rss: 72Mb L: 5/7 MS: 1 CrossOver- 00:06:30.636 [2024-07-12 13:32:19.041492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008585 cdw11:00000000 00:06:30.636 [2024-07-12 13:32:19.041520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.636 #37 NEW cov: 12078 ft: 13993 corp: 26/68b lim: 10 exec/s: 37 rss: 72Mb L: 2/7 MS: 1 CopyPart- 00:06:30.636 [2024-07-12 13:32:19.101787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008d40 cdw11:00000000 00:06:30.636 [2024-07-12 13:32:19.101814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.636 #38 NEW cov: 12078 ft: 14060 corp: 27/70b lim: 10 exec/s: 38 rss: 72Mb L: 2/7 MS: 1 ShuffleBytes- 00:06:30.636 [2024-07-12 13:32:19.172015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000030ff cdw11:00000000 00:06:30.636 [2024-07-12 13:32:19.172041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.636 #39 NEW cov: 12078 ft: 14066 corp: 28/72b lim: 10 exec/s: 39 rss: 72Mb L: 2/7 MS: 1 CopyPart- 00:06:30.896 [2024-07-12 13:32:19.232204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004025 cdw11:00000000 00:06:30.896 [2024-07-12 13:32:19.232236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.896 #40 NEW cov: 12078 ft: 14117 corp: 29/74b lim: 10 exec/s: 40 rss: 72Mb L: 2/7 MS: 1 ChangeBit- 00:06:30.896 [2024-07-12 13:32:19.292434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004085 cdw11:00000000 00:06:30.896 [2024-07-12 13:32:19.292461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.896 #41 NEW cov: 12078 ft: 14128 corp: 30/76b lim: 10 exec/s: 20 rss: 72Mb L: 2/7 MS: 1 CrossOver- 00:06:30.896 #41 DONE cov: 12078 ft: 14128 corp: 30/76b lim: 10 exec/s: 20 rss: 72Mb 00:06:30.896 Done 41 runs in 2 second(s) 00:06:30.896 13:32:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:06:30.896 13:32:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:30.896 13:32:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:30.896 13:32:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:06:30.896 13:32:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:06:30.896 13:32:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:30.896 
13:32:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:30.896 13:32:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:30.896 13:32:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:06:30.896 13:32:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:30.897 13:32:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:30.897 13:32:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:06:30.897 13:32:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:06:30.897 13:32:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:30.897 13:32:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:06:30.897 13:32:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:30.897 13:32:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:30.897 13:32:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:30.897 13:32:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:06:30.897 [2024-07-12 13:32:19.471035] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:06:30.897 [2024-07-12 13:32:19.471125] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2441480 ] 00:06:31.156 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.156 [2024-07-12 13:32:19.618951] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.156 [2024-07-12 13:32:19.671209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.156 [2024-07-12 13:32:19.732515] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.416 [2024-07-12 13:32:19.748816] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:06:31.416 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:31.416 INFO: Seed: 1224782065 00:06:31.416 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:31.416 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:31.416 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:31.416 INFO: A corpus is not provided, starting from an empty corpus 00:06:31.416 [2024-07-12 13:32:19.816022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.416 [2024-07-12 13:32:19.816057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.416 #2 INITED cov: 11862 ft: 11863 corp: 1/1b exec/s: 0 rss: 70Mb 00:06:31.416 [2024-07-12 13:32:19.876151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.416 [2024-07-12 13:32:19.876182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.416 #3 NEW cov: 11992 ft: 12409 corp: 2/2b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 ChangeByte- 00:06:31.416 [2024-07-12 13:32:19.947796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.416 [2024-07-12 13:32:19.947824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.416 [2024-07-12 13:32:19.947942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.416 [2024-07-12 13:32:19.947957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.417 [2024-07-12 13:32:19.948080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.417 [2024-07-12 13:32:19.948096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.417 [2024-07-12 13:32:19.948215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.417 [2024-07-12 13:32:19.948234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.417 #4 NEW cov: 11998 ft: 13451 corp: 3/6b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:06:31.677 [2024-07-12 13:32:20.007361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.677 [2024-07-12 13:32:20.007390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.677 [2024-07-12 13:32:20.007504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:31.677 [2024-07-12 13:32:20.007518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.677 #5 NEW cov: 12083 ft: 13999 corp: 4/8b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 InsertByte- 00:06:31.677 [2024-07-12 13:32:20.067565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.677 [2024-07-12 13:32:20.067596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.677 [2024-07-12 13:32:20.067718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.677 [2024-07-12 13:32:20.067736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.677 #6 NEW cov: 12083 ft: 14053 corp: 5/10b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 InsertByte- 00:06:31.677 [2024-07-12 13:32:20.127461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.677 [2024-07-12 13:32:20.127495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.677 #7 NEW cov: 12083 ft: 14231 corp: 6/11b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 EraseBytes- 00:06:31.677 [2024-07-12 13:32:20.199398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.677 [2024-07-12 13:32:20.199428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.677 [2024-07-12 13:32:20.199553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.677 [2024-07-12 13:32:20.199569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.677 [2024-07-12 13:32:20.199686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.677 [2024-07-12 13:32:20.199701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.677 [2024-07-12 13:32:20.199810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.677 [2024-07-12 13:32:20.199824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.677 [2024-07-12 13:32:20.199939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.677 [2024-07-12 13:32:20.199955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:31.677 #8 NEW cov: 12083 ft: 14381 corp: 7/16b 
lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:31.677 [2024-07-12 13:32:20.258276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.677 [2024-07-12 13:32:20.258305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.938 #9 NEW cov: 12083 ft: 14403 corp: 8/17b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ChangeBit- 00:06:31.938 [2024-07-12 13:32:20.318738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.938 [2024-07-12 13:32:20.318766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.938 #10 NEW cov: 12083 ft: 14428 corp: 9/18b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ChangeBit- 00:06:31.938 [2024-07-12 13:32:20.390760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.938 [2024-07-12 13:32:20.390788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.938 [2024-07-12 13:32:20.390905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.938 [2024-07-12 13:32:20.390925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.938 [2024-07-12 13:32:20.391048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.938 [2024-07-12 13:32:20.391065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.938 [2024-07-12 13:32:20.391182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.938 [2024-07-12 13:32:20.391200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.938 [2024-07-12 13:32:20.391310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.938 [2024-07-12 13:32:20.391327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:31.938 #11 NEW cov: 12083 ft: 14465 corp: 10/23b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:31.938 [2024-07-12 13:32:20.469706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.938 [2024-07-12 13:32:20.469736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.938 #12 NEW cov: 12083 ft: 14502 corp: 11/24b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 CrossOver- 00:06:32.198 [2024-07-12 
13:32:20.531755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.198 [2024-07-12 13:32:20.531783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.198 [2024-07-12 13:32:20.531900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.198 [2024-07-12 13:32:20.531915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.198 [2024-07-12 13:32:20.532040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.198 [2024-07-12 13:32:20.532055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.198 [2024-07-12 13:32:20.532171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.198 [2024-07-12 13:32:20.532186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.198 [2024-07-12 13:32:20.532300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.198 [2024-07-12 13:32:20.532315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.199 #13 NEW cov: 12083 ft: 14538 corp: 12/29b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertByte- 00:06:32.199 [2024-07-12 13:32:20.610747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.199 [2024-07-12 13:32:20.610778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.199 #14 NEW cov: 12083 ft: 14565 corp: 13/30b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 CrossOver- 00:06:32.199 [2024-07-12 13:32:20.681548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.199 [2024-07-12 13:32:20.681579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.199 [2024-07-12 13:32:20.681696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.199 [2024-07-12 13:32:20.681713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.459 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:32.459 #15 NEW cov: 12106 ft: 14600 corp: 14/32b lim: 5 exec/s: 15 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:32.459 [2024-07-12 13:32:20.873643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.459 [2024-07-12 13:32:20.873688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.459 [2024-07-12 13:32:20.873811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.459 [2024-07-12 13:32:20.873828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.459 [2024-07-12 13:32:20.873946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.459 [2024-07-12 13:32:20.873965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.459 [2024-07-12 13:32:20.874079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.459 [2024-07-12 13:32:20.874097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.459 [2024-07-12 13:32:20.874220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.459 [2024-07-12 13:32:20.874242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.459 #16 NEW cov: 12106 ft: 14707 corp: 15/37b lim: 5 exec/s: 16 rss: 73Mb L: 5/5 MS: 1 ChangeBit- 00:06:32.459 [2024-07-12 13:32:20.953799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.459 [2024-07-12 13:32:20.953829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.459 [2024-07-12 13:32:20.953950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.459 [2024-07-12 13:32:20.953964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.459 [2024-07-12 13:32:20.954085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.459 [2024-07-12 13:32:20.954101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.459 [2024-07-12 13:32:20.954224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.459 [2024-07-12 13:32:20.954247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.459 [2024-07-12 13:32:20.954368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 
cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.459 [2024-07-12 13:32:20.954384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.459 #17 NEW cov: 12106 ft: 14738 corp: 16/42b lim: 5 exec/s: 17 rss: 73Mb L: 5/5 MS: 1 ChangeBit- 00:06:32.459 [2024-07-12 13:32:21.033834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.459 [2024-07-12 13:32:21.033862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.459 [2024-07-12 13:32:21.033976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.459 [2024-07-12 13:32:21.033992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.459 [2024-07-12 13:32:21.034108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.459 [2024-07-12 13:32:21.034123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.459 [2024-07-12 13:32:21.034254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.459 [2024-07-12 13:32:21.034270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.720 #18 NEW cov: 12106 ft: 14752 corp: 17/46b lim: 5 exec/s: 18 rss: 73Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:32.720 [2024-07-12 13:32:21.093919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.720 [2024-07-12 13:32:21.093948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.720 [2024-07-12 13:32:21.094062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.720 [2024-07-12 13:32:21.094078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.720 [2024-07-12 13:32:21.094197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.720 [2024-07-12 13:32:21.094213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.720 #19 NEW cov: 12106 ft: 15012 corp: 18/49b lim: 5 exec/s: 19 rss: 73Mb L: 3/5 MS: 1 CrossOver- 00:06:32.720 [2024-07-12 13:32:21.154960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.720 [2024-07-12 13:32:21.154988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.720 [2024-07-12 13:32:21.155112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.720 [2024-07-12 13:32:21.155130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.720 [2024-07-12 13:32:21.155244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.720 [2024-07-12 13:32:21.155262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.720 [2024-07-12 13:32:21.155383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.720 [2024-07-12 13:32:21.155400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.720 [2024-07-12 13:32:21.155528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.720 [2024-07-12 13:32:21.155545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.720 #20 NEW cov: 12106 ft: 15025 corp: 19/54b lim: 5 exec/s: 20 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:06:32.720 [2024-07-12 13:32:21.213772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.720 [2024-07-12 13:32:21.213801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.720 #21 NEW cov: 12106 ft: 15033 corp: 20/55b lim: 5 exec/s: 21 rss: 73Mb L: 1/5 MS: 1 ChangeBit- 00:06:32.720 [2024-07-12 13:32:21.274134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.720 [2024-07-12 13:32:21.274162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.720 #22 NEW cov: 12106 ft: 15046 corp: 21/56b lim: 5 exec/s: 22 rss: 73Mb L: 1/5 MS: 1 ChangeBinInt- 00:06:32.982 [2024-07-12 13:32:21.335753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.982 [2024-07-12 13:32:21.335780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.982 [2024-07-12 13:32:21.335899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.982 [2024-07-12 13:32:21.335916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.982 [2024-07-12 13:32:21.336046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.982 [2024-07-12 13:32:21.336065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.982 [2024-07-12 13:32:21.336185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.982 [2024-07-12 13:32:21.336201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.982 #23 NEW cov: 12106 ft: 15106 corp: 22/60b lim: 5 exec/s: 23 rss: 73Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:32.982 [2024-07-12 13:32:21.415415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.982 [2024-07-12 13:32:21.415442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.982 [2024-07-12 13:32:21.415565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.982 [2024-07-12 13:32:21.415582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.982 #24 NEW cov: 12106 ft: 15161 corp: 23/62b lim: 5 exec/s: 24 rss: 73Mb L: 2/5 MS: 1 CopyPart- 00:06:32.982 [2024-07-12 13:32:21.496573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.982 [2024-07-12 13:32:21.496600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.982 [2024-07-12 13:32:21.496717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.982 [2024-07-12 13:32:21.496733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.982 [2024-07-12 13:32:21.496856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.982 [2024-07-12 13:32:21.496872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.982 [2024-07-12 13:32:21.496986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.982 [2024-07-12 13:32:21.497002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.982 #25 NEW cov: 12106 ft: 15227 corp: 24/66b lim: 5 exec/s: 25 rss: 73Mb L: 4/5 MS: 1 CrossOver- 00:06:33.242 [2024-07-12 13:32:21.577367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.242 [2024-07-12 13:32:21.577394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:33.242 [2024-07-12 13:32:21.577516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.242 [2024-07-12 13:32:21.577530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.242 [2024-07-12 13:32:21.577649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.242 [2024-07-12 13:32:21.577665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.242 [2024-07-12 13:32:21.577777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.243 [2024-07-12 13:32:21.577793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.243 [2024-07-12 13:32:21.577905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.243 [2024-07-12 13:32:21.577920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.243 #26 NEW cov: 12106 ft: 15241 corp: 25/71b lim: 5 exec/s: 26 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:06:33.243 [2024-07-12 13:32:21.656567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.243 [2024-07-12 13:32:21.656595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.243 [2024-07-12 13:32:21.656721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.243 [2024-07-12 13:32:21.656737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.243 #27 NEW cov: 12106 ft: 15272 corp: 26/73b lim: 5 exec/s: 27 rss: 73Mb L: 2/5 MS: 1 InsertByte- 00:06:33.243 [2024-07-12 13:32:21.716777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.243 [2024-07-12 13:32:21.716805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.243 [2024-07-12 13:32:21.716921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.243 [2024-07-12 13:32:21.716937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.243 #28 NEW cov: 12106 ft: 15340 corp: 27/75b lim: 5 exec/s: 28 rss: 73Mb L: 2/5 MS: 1 EraseBytes- 00:06:33.243 [2024-07-12 13:32:21.796929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000
00:06:33.243 [2024-07-12 13:32:21.796958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:33.503 #29 NEW cov: 12106 ft: 15350 corp: 28/76b lim: 5 exec/s: 14 rss: 74Mb L: 1/5 MS: 1 ChangeBit-
00:06:33.503 #29 DONE cov: 12106 ft: 15350 corp: 28/76b lim: 5 exec/s: 14 rss: 74Mb
00:06:33.503 Done 29 runs in 2 second(s)
00:06:33.503 13:32:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz
00:06:33.503 13:32:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:33.503 13:32:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:33.503 13:32:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1
00:06:33.503 13:32:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9
00:06:33.503 13:32:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:33.503 13:32:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:33.503 13:32:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
00:06:33.503 13:32:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf
00:06:33.503 13:32:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:33.503 13:32:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:33.503 13:32:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9
00:06:33.503 13:32:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409
00:06:33.503 13:32:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
00:06:33.503 13:32:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409'
00:06:33.503 13:32:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:33.503 13:32:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:33.503 13:32:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:33.503 13:32:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9
[2024-07-12 13:32:21.976785] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization...
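For reference, the nvmf/run.sh xtrace above reduces to the following standalone launch of fuzzer run 9. This is a sketch under stated assumptions, not the script itself: WORKSPACE is our shorthand for /var/jenkins/workspace/short-fuzz-phy-autotest, and the redirects into nvmf_cfg and suppress_file are implied by the local variable assignments but are not visible in the xtrace.

    WORKSPACE=/var/jenkins/workspace/short-fuzz-phy-autotest    # shorthand (our assumption)
    # Clone the NVMe-oF TCP target config, moving the listener from the default port 4420 to this run's 4409
    sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' \
        "$WORKSPACE/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf" > /tmp/fuzz_json_9.conf
    # LeakSanitizer suppressions written by the two echo steps (run.sh@41 and run.sh@42)
    { echo leak:spdk_nvmf_qpair_disconnect; echo leak:nvmf_ctrlr_create; } > /var/tmp/suppress_nvmf_fuzz
    # Launch fuzzer type 9 (-Z 9) on core 0 (-m 0x1) with the time budget from timen (-t 1)
    LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 \
        "$WORKSPACE/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
        -P "$WORKSPACE/spdk/../output/llvm/" \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' \
        -c /tmp/fuzz_json_9.conf -t 1 -D "$WORKSPACE/spdk/../corpus/llvm_nvmf_9" -Z 9

All flags are taken verbatim from the run.sh@45 trace line above; only the WORKSPACE shorthand and the two output redirects are filled in by us.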
00:06:33.503 [2024-07-12 13:32:21.976884] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2441839 ]
00:06:33.657 EAL: No free 2048 kB hugepages reported on node 1
00:06:33.657 [2024-07-12 13:32:22.137935] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:33.657 [2024-07-12 13:32:22.190414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:33.763 [2024-07-12 13:32:22.251949] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:33.763 [2024-07-12 13:32:22.268259] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 ***
00:06:33.763 INFO: Running with entropic power schedule (0xFF, 100).
00:06:33.763 INFO: Seed: 3741790051
00:06:33.763 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1),
00:06:33.763 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218),
00:06:33.763 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
00:06:33.763 INFO: A corpus is not provided, starting from an empty corpus
00:06:33.763 [2024-07-12 13:32:22.328353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:33.763 [2024-07-12 13:32:22.328388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:34.023 #2 INITED cov: 11856 ft: 11857 corp: 1/1b exec/s: 0 rss: 69Mb
00:06:34.023 [2024-07-12 13:32:22.388540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:34.023 [2024-07-12 13:32:22.388573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:34.023 #3 NEW cov: 11992 ft: 12372 corp: 2/2b lim: 5 exec/s: 0 rss: 69Mb L: 1/1 MS: 1 ChangeBit-
00:06:34.023 [2024-07-12 13:32:22.458798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:34.023 [2024-07-12 13:32:22.458828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:34.023 #4 NEW cov: 11998 ft: 12753 corp: 3/3b lim: 5 exec/s: 0 rss: 69Mb L: 1/1 MS: 1 ChangeByte-
00:06:34.023 [2024-07-12 13:32:22.519004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:34.023 [2024-07-12 13:32:22.519035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:34.023 #5 NEW cov: 12083 ft: 12985 corp: 4/4b lim: 5 exec/s: 0 rss: 69Mb L: 1/1 MS: 1 ChangeBit-
00:06:34.023 [2024-07-12 13:32:22.579143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:34.023 [2024-07-12 13:32:22.579172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0
sqhd:000f p:0 m:0 dnr:0 00:06:34.284 #6 NEW cov: 12083 ft: 13019 corp: 5/5b lim: 5 exec/s: 0 rss: 69Mb L: 1/1 MS: 1 ShuffleBytes- 00:06:34.284 [2024-07-12 13:32:22.649422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.284 [2024-07-12 13:32:22.649450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.284 #7 NEW cov: 12083 ft: 13063 corp: 6/6b lim: 5 exec/s: 0 rss: 69Mb L: 1/1 MS: 1 ChangeBit- 00:06:34.284 [2024-07-12 13:32:22.711114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.284 [2024-07-12 13:32:22.711143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.284 [2024-07-12 13:32:22.711265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.284 [2024-07-12 13:32:22.711284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.284 [2024-07-12 13:32:22.711409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.284 [2024-07-12 13:32:22.711424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.284 [2024-07-12 13:32:22.711535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.284 [2024-07-12 13:32:22.711551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.284 [2024-07-12 13:32:22.711663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.284 [2024-07-12 13:32:22.711680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.284 #8 NEW cov: 12083 ft: 13942 corp: 7/11b lim: 5 exec/s: 0 rss: 69Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:34.284 [2024-07-12 13:32:22.769880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.284 [2024-07-12 13:32:22.769907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.284 #9 NEW cov: 12083 ft: 14054 corp: 8/12b lim: 5 exec/s: 0 rss: 69Mb L: 1/5 MS: 1 ChangeByte- 00:06:34.284 [2024-07-12 13:32:22.830177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.284 [2024-07-12 13:32:22.830206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.545 #10 NEW cov: 12083 ft: 14070 corp: 9/13b lim: 5 exec/s: 0 rss: 
69Mb L: 1/5 MS: 1 CopyPart- 00:06:34.545 [2024-07-12 13:32:22.900368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.545 [2024-07-12 13:32:22.900395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.545 #11 NEW cov: 12083 ft: 14158 corp: 10/14b lim: 5 exec/s: 0 rss: 69Mb L: 1/5 MS: 1 ChangeBit- 00:06:34.545 [2024-07-12 13:32:22.970528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.545 [2024-07-12 13:32:22.970557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.545 #12 NEW cov: 12083 ft: 14174 corp: 11/15b lim: 5 exec/s: 0 rss: 69Mb L: 1/5 MS: 1 ChangeBit- 00:06:34.545 [2024-07-12 13:32:23.040863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.545 [2024-07-12 13:32:23.040892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.545 #13 NEW cov: 12083 ft: 14184 corp: 12/16b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:34.545 [2024-07-12 13:32:23.111377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.545 [2024-07-12 13:32:23.111404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.545 [2024-07-12 13:32:23.111517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.545 [2024-07-12 13:32:23.111537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.806 #14 NEW cov: 12083 ft: 14401 corp: 13/18b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 InsertByte- 00:06:34.806 [2024-07-12 13:32:23.172399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.806 [2024-07-12 13:32:23.172427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.806 [2024-07-12 13:32:23.172540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.806 [2024-07-12 13:32:23.172556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.806 [2024-07-12 13:32:23.172671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.806 [2024-07-12 13:32:23.172684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.806 [2024-07-12 13:32:23.172795] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.806 [2024-07-12 13:32:23.172810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.806 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:34.806 #15 NEW cov: 12106 ft: 14444 corp: 14/22b lim: 5 exec/s: 15 rss: 72Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:34.806 [2024-07-12 13:32:23.362026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.806 [2024-07-12 13:32:23.362071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.067 #16 NEW cov: 12106 ft: 14575 corp: 15/23b lim: 5 exec/s: 16 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:35.067 [2024-07-12 13:32:23.442413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.067 [2024-07-12 13:32:23.442441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.067 [2024-07-12 13:32:23.442562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.067 [2024-07-12 13:32:23.442577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.067 #17 NEW cov: 12106 ft: 14588 corp: 16/25b lim: 5 exec/s: 17 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:35.067 [2024-07-12 13:32:23.502272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.067 [2024-07-12 13:32:23.502300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.067 #18 NEW cov: 12106 ft: 14595 corp: 17/26b lim: 5 exec/s: 18 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:35.067 [2024-07-12 13:32:23.562432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.067 [2024-07-12 13:32:23.562459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.067 #19 NEW cov: 12106 ft: 14605 corp: 18/27b lim: 5 exec/s: 19 rss: 72Mb L: 1/5 MS: 1 CrossOver- 00:06:35.067 [2024-07-12 13:32:23.632763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.067 [2024-07-12 13:32:23.632791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.328 #20 NEW cov: 12106 ft: 14625 corp: 19/28b lim: 5 exec/s: 20 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:06:35.328 [2024-07-12 13:32:23.693657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.328 [2024-07-12 13:32:23.693687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.328 [2024-07-12 13:32:23.693800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.328 [2024-07-12 13:32:23.693814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.328 [2024-07-12 13:32:23.693928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.328 [2024-07-12 13:32:23.693942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.328 #21 NEW cov: 12106 ft: 14795 corp: 20/31b lim: 5 exec/s: 21 rss: 72Mb L: 3/5 MS: 1 CrossOver- 00:06:35.328 [2024-07-12 13:32:23.773513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.328 [2024-07-12 13:32:23.773542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.329 [2024-07-12 13:32:23.773660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.329 [2024-07-12 13:32:23.773676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.329 #22 NEW cov: 12106 ft: 14808 corp: 21/33b lim: 5 exec/s: 22 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:35.329 [2024-07-12 13:32:23.834776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.329 [2024-07-12 13:32:23.834804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.329 [2024-07-12 13:32:23.834916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.329 [2024-07-12 13:32:23.834931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.329 [2024-07-12 13:32:23.835040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.329 [2024-07-12 13:32:23.835055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.329 [2024-07-12 13:32:23.835170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.329 [2024-07-12 13:32:23.835185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.329 [2024-07-12 13:32:23.835298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.329 [2024-07-12 13:32:23.835317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.329 #23 NEW cov: 12106 ft: 14815 corp: 22/38b lim: 5 exec/s: 23 rss: 72Mb L: 5/5 MS: 1 CMP- DE: "+?"- 00:06:35.589 [2024-07-12 13:32:23.914339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.589 [2024-07-12 13:32:23.914367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.589 [2024-07-12 13:32:23.914488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.589 [2024-07-12 13:32:23.914503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.589 [2024-07-12 13:32:23.914621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.589 [2024-07-12 13:32:23.914636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.589 #24 NEW cov: 12106 ft: 14835 corp: 23/41b lim: 5 exec/s: 24 rss: 72Mb L: 3/5 MS: 1 PersAutoDict- DE: "+?"- 00:06:35.589 [2024-07-12 13:32:23.994154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.589 [2024-07-12 13:32:23.994182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.589 [2024-07-12 13:32:23.994300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.589 [2024-07-12 13:32:23.994315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.589 #25 NEW cov: 12106 ft: 14846 corp: 24/43b lim: 5 exec/s: 25 rss: 72Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:35.589 [2024-07-12 13:32:24.074858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.589 [2024-07-12 13:32:24.074888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.589 [2024-07-12 13:32:24.075003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.589 [2024-07-12 13:32:24.075020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.589 [2024-07-12 13:32:24.075127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.589 [2024-07-12 13:32:24.075140] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.589 #26 NEW cov: 12106 ft: 14906 corp: 25/46b lim: 5 exec/s: 26 rss: 72Mb L: 3/5 MS: 1 InsertByte- 00:06:35.589 [2024-07-12 13:32:24.154702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.589 [2024-07-12 13:32:24.154729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.589 [2024-07-12 13:32:24.154837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.589 [2024-07-12 13:32:24.154852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.850 #27 NEW cov: 12106 ft: 14918 corp: 26/48b lim: 5 exec/s: 27 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:35.850 [2024-07-12 13:32:24.224585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.850 [2024-07-12 13:32:24.224614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.850 #28 NEW cov: 12106 ft: 14935 corp: 27/49b lim: 5 exec/s: 28 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:35.850 [2024-07-12 13:32:24.284764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.850 [2024-07-12 13:32:24.284791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.850 #29 NEW cov: 12106 ft: 14936 corp: 28/50b lim: 5 exec/s: 14 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:06:35.850 #29 DONE cov: 12106 ft: 14936 corp: 28/50b lim: 5 exec/s: 14 rss: 72Mb 00:06:35.850 ###### Recommended dictionary. ###### 00:06:35.850 "+?" # Uses: 1 00:06:35.850 ###### End of recommended dictionary. 
######
00:06:35.851 Done 29 runs in 2 second(s)
00:06:35.851 13:32:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz
00:06:35.851 13:32:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:35.851 13:32:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:35.851 13:32:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1
00:06:35.851 13:32:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10
00:06:35.851 13:32:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:35.851 13:32:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:35.851 13:32:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10
00:06:35.851 13:32:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf
00:06:35.851 13:32:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:35.851 13:32:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:35.851 13:32:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10
00:06:35.851 13:32:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410
00:06:35.851 13:32:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10
00:06:35.851 13:32:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410'
00:06:35.851 13:32:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:36.112 13:32:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:36.112 13:32:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:36.112 13:32:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10
[2024-07-12 13:32:24.463666] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization...
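Piecing together the ../common.sh and nvmf/run.sh xtrace lines above ((( i++ )), (( i < fuzz_num )), start_llvm_fuzz 10 1 0x1, printf %02d 10, port=4410), the driver that schedules these runs behaves roughly like the sketch below. This is a reconstruction from the trace, not the actual scripts: the values of fuzz_num and rootdir are assumptions, and only the port/corpus derivation is shown inside start_llvm_fuzz.

    # Reconstructed outline of the per-fuzzer driver loop (inferred from the xtrace, not copied from common.sh)
    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk    # assumption
    fuzz_num=25                                                    # assumption; not visible in this excerpt
    start_llvm_fuzz() {    # called as: start_llvm_fuzz <fuzzer_type> <time> <core_mask>
        local fuzzer_type=$1 timen=$2 core=$3
        local port=44$(printf %02d "$fuzzer_type")    # 9 -> 4409, 10 -> 4410, matching the trace
        local corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
        mkdir -p "$corpus_dir"
        # ...then rewrite the JSON config for $port and launch llvm_nvme_fuzz as in the run-9 sketch earlier
    }
    for (( i = 0; i < fuzz_num; i++ )); do    # matches the (( i++ )) / (( i < fuzz_num )) trace lines
        start_llvm_fuzz "$i" 1 0x1
    done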
00:06:36.112 [2024-07-12 13:32:24.463793] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2442518 ]
00:06:36.112 EAL: No free 2048 kB hugepages reported on node 1
00:06:36.112 [2024-07-12 13:32:24.618088] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:36.373 [2024-07-12 13:32:24.673610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.373 [2024-07-12 13:32:24.735164] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:36.373 [2024-07-12 13:32:24.751466] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 ***
00:06:36.373 INFO: Running with entropic power schedule (0xFF, 100).
00:06:36.373 INFO: Seed: 1930818015
00:06:36.373 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1),
00:06:36.373 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218),
00:06:36.373 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10
00:06:36.373 INFO: A corpus is not provided, starting from an empty corpus
00:06:36.373 #2 INITED exec/s: 0 rss: 65Mb
00:06:36.373 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:06:36.373 This may also happen if the target rejected all inputs we tried so far
00:06:36.373 [2024-07-12 13:32:24.800560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:79e1704c cdw11:fe1a2700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.373 [2024-07-12 13:32:24.800598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:36.633 NEW_FUNC[1/695]: 0x490cf0 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205
00:06:36.633 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:06:36.633 #5 NEW cov: 11885 ft: 11886 corp: 2/10b lim: 40 exec/s: 0 rss: 72Mb L: 9/9 MS: 3 CopyPart-ChangeByte-CMP- DE: "y\341pL\376\032'\000"-
00:06:36.633 [2024-07-12 13:32:24.981329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.633 [2024-07-12 13:32:24.981382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:36.633 #8 NEW cov: 12015 ft: 12474 corp: 3/23b lim: 40 exec/s: 0 rss: 72Mb L: 13/13 MS: 3 ChangeBit-InsertByte-InsertRepeatedBytes-
00:06:36.633 [2024-07-12 13:32:25.031019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:79e1704c cdw11:fb1a2700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:36.633 [2024-07-12 13:32:25.031044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:36.633 #14 NEW cov: 12021 ft: 12633 corp: 4/32b lim: 40 exec/s: 0 rss: 72Mb L: 9/13 MS: 1 ChangeBinInt-
00:06:36.633 [2024-07-12 13:32:25.091164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:620ae179
cdw11:e1704cfb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.633 [2024-07-12 13:32:25.091191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.633 #18 NEW cov: 12106 ft: 12894 corp: 5/44b lim: 40 exec/s: 0 rss: 72Mb L: 12/13 MS: 4 CopyPart-InsertByte-InsertByte-CrossOver- 00:06:36.633 [2024-07-12 13:32:25.131276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:79e14c4c cdw11:fe1a2700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.633 [2024-07-12 13:32:25.131300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.633 #19 NEW cov: 12106 ft: 13029 corp: 6/53b lim: 40 exec/s: 0 rss: 72Mb L: 9/13 MS: 1 CrossOver- 00:06:36.633 [2024-07-12 13:32:25.181424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:7979e170 cdw11:4cfe1a27 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.633 [2024-07-12 13:32:25.181450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.633 #20 NEW cov: 12106 ft: 13147 corp: 7/62b lim: 40 exec/s: 0 rss: 72Mb L: 9/13 MS: 1 PersAutoDict- DE: "y\341pL\376\032'\000"- 00:06:36.897 [2024-07-12 13:32:25.221860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:79e1704c cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.897 [2024-07-12 13:32:25.221890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.897 [2024-07-12 13:32:25.221939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.897 [2024-07-12 13:32:25.221949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.897 [2024-07-12 13:32:25.221996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.897 [2024-07-12 13:32:25.222008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.897 [2024-07-12 13:32:25.222059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:fffffffb cdw11:1a27003b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.897 [2024-07-12 13:32:25.222069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.897 #21 NEW cov: 12106 ft: 13950 corp: 8/94b lim: 40 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:06:36.897 [2024-07-12 13:32:25.281660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:79e1704c cdw11:fe1a2700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.897 [2024-07-12 13:32:25.281686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.897 #22 NEW cov: 12106 ft: 14032 corp: 9/103b lim: 40 exec/s: 0 rss: 72Mb L: 9/32 MS: 1 ChangeBit- 00:06:36.897 [2024-07-12 13:32:25.321741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a79e170 cdw11:4cfb1a27 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.897 [2024-07-12 13:32:25.321766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.897 #23 NEW cov: 12106 ft: 14093 corp: 10/113b lim: 40 exec/s: 0 rss: 72Mb L: 10/32 MS: 1 CrossOver- 00:06:36.897 [2024-07-12 13:32:25.371881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a79e170 cdw11:704c1a27 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.897 [2024-07-12 13:32:25.371905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.897 #24 NEW cov: 12106 ft: 14157 corp: 11/123b lim: 40 exec/s: 0 rss: 72Mb L: 10/32 MS: 1 CrossOver- 00:06:36.897 [2024-07-12 13:32:25.432056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:620ae179 cdw11:e1704cfb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.897 [2024-07-12 13:32:25.432080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.897 #25 NEW cov: 12106 ft: 14183 corp: 12/138b lim: 40 exec/s: 0 rss: 72Mb L: 15/32 MS: 1 InsertRepeatedBytes- 00:06:37.189 [2024-07-12 13:32:25.492213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a79e170 cdw11:4c40fb1a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.189 [2024-07-12 13:32:25.492243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.189 #26 NEW cov: 12106 ft: 14199 corp: 13/149b lim: 40 exec/s: 0 rss: 72Mb L: 11/32 MS: 1 InsertByte- 00:06:37.190 [2024-07-12 13:32:25.532437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:620ae179 cdw11:e179e170 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.190 [2024-07-12 13:32:25.532462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.190 [2024-07-12 13:32:25.532511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:4cfe1a27 cdw11:00704cfb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.190 [2024-07-12 13:32:25.532525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.190 #27 NEW cov: 12106 ft: 14450 corp: 14/169b lim: 40 exec/s: 0 rss: 72Mb L: 20/32 MS: 1 PersAutoDict- DE: "y\341pL\376\032'\000"- 00:06:37.190 [2024-07-12 13:32:25.582649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:620ae179 cdw11:e179e170 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.190 [2024-07-12 13:32:25.582674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.190 [2024-07-12 13:32:25.582723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:4c000000 cdw11:04fe1a27 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.190 [2024-07-12 13:32:25.582733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.190 [2024-07-12 13:32:25.582782] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00704cfb cdw11:1a27003b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.190 [2024-07-12 13:32:25.582792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.190 #28 NEW cov: 12106 ft: 14669 corp: 15/193b lim: 40 exec/s: 0 rss: 73Mb L: 24/32 MS: 1 CMP- DE: "\000\000\000\004"- 00:06:37.190 [2024-07-12 13:32:25.642929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:62171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.190 [2024-07-12 13:32:25.642954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.190 [2024-07-12 13:32:25.643005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.190 [2024-07-12 13:32:25.643015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.190 [2024-07-12 13:32:25.643064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:1717170a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.190 [2024-07-12 13:32:25.643075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.190 [2024-07-12 13:32:25.643123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:e179e170 cdw11:4cfb1a27 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.190 [2024-07-12 13:32:25.643134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.190 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:37.190 #29 NEW cov: 12129 ft: 14697 corp: 16/230b lim: 40 exec/s: 0 rss: 73Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:06:37.190 [2024-07-12 13:32:25.703002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:620ae179 cdw11:e179e170 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.190 [2024-07-12 13:32:25.703030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.190 [2024-07-12 13:32:25.703080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:4cfe1a27 cdw11:00704cfb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.190 [2024-07-12 13:32:25.703091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.190 [2024-07-12 13:32:25.703139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:1a27003b cdw11:2700704c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.190 [2024-07-12 13:32:25.703153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.190 #35 NEW cov: 12129 ft: 14704 corp: 17/256b lim: 40 exec/s: 0 rss: 73Mb L: 26/37 MS: 1 CopyPart- 00:06:37.190 [2024-07-12 13:32:25.752886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a79e170 cdw11:fb1a2700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.190 [2024-07-12 13:32:25.752913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.485 #36 NEW cov: 12129 ft: 14722 corp: 18/265b lim: 40 exec/s: 0 rss: 73Mb L: 9/37 MS: 1 EraseBytes- 00:06:37.485 [2024-07-12 13:32:25.793001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a79e170 cdw11:4c40fb1a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.485 [2024-07-12 13:32:25.793027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.485 #37 NEW cov: 12129 ft: 14725 corp: 19/276b lim: 40 exec/s: 37 rss: 73Mb L: 11/37 MS: 1 ChangeByte- 00:06:37.485 [2024-07-12 13:32:25.853173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:79e14c4c cdw11:fe27003b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.485 [2024-07-12 13:32:25.853199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.485 #38 NEW cov: 12129 ft: 14741 corp: 20/284b lim: 40 exec/s: 38 rss: 73Mb L: 8/37 MS: 1 EraseBytes- 00:06:37.485 [2024-07-12 13:32:25.913542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a79e170 cdw11:4cfb1a27 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.485 [2024-07-12 13:32:25.913568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.485 [2024-07-12 13:32:25.913619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:003b3a3a cdw11:3a3a3a3a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.485 [2024-07-12 13:32:25.913630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.485 [2024-07-12 13:32:25.913682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:3a3a3a3a cdw11:3a3a3a3a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.485 [2024-07-12 13:32:25.913692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.485 #39 NEW cov: 12129 ft: 14754 corp: 21/312b lim: 40 exec/s: 39 rss: 73Mb L: 28/37 MS: 1 InsertRepeatedBytes- 00:06:37.485 [2024-07-12 13:32:25.963504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e170704c cdw11:1a27003b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.485 [2024-07-12 13:32:25.963530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.485 #40 NEW cov: 12129 ft: 14811 corp: 22/320b lim: 40 exec/s: 40 rss: 73Mb L: 8/37 MS: 1 EraseBytes- 00:06:37.485 [2024-07-12 13:32:26.023961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:79e17000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.485 [2024-07-12 13:32:26.023987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.485 [2024-07-12 13:32:26.024036] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.485 [2024-07-12 13:32:26.024047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.485 [2024-07-12 13:32:26.024098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.485 [2024-07-12 13:32:26.024112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.485 [2024-07-12 13:32:26.024162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.485 [2024-07-12 13:32:26.024172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.772 #41 NEW cov: 12129 ft: 14835 corp: 23/359b lim: 40 exec/s: 41 rss: 73Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:06:37.772 [2024-07-12 13:32:26.074123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a79e170 cdw11:4cfb1a27 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.772 [2024-07-12 13:32:26.074148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.772 [2024-07-12 13:32:26.074197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:21212121 cdw11:21212121 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.772 [2024-07-12 13:32:26.074208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.772 [2024-07-12 13:32:26.074258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:21212121 cdw11:21212121 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.772 [2024-07-12 13:32:26.074269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.772 [2024-07-12 13:32:26.074319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:21212121 cdw11:21212121 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.772 [2024-07-12 13:32:26.074329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.772 #42 NEW cov: 12129 ft: 14883 corp: 24/393b lim: 40 exec/s: 42 rss: 73Mb L: 34/39 MS: 1 InsertRepeatedBytes- 00:06:37.772 [2024-07-12 13:32:26.123923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:797979e1 cdw11:704cfe79 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.772 [2024-07-12 13:32:26.123947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.772 #43 NEW cov: 12129 ft: 14885 corp: 25/408b lim: 40 exec/s: 43 rss: 73Mb L: 15/39 MS: 1 CopyPart- 00:06:37.772 [2024-07-12 13:32:26.184075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9e9e9e9e cdw11:9e9e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.772 [2024-07-12 
13:32:26.184101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.772 #44 NEW cov: 12129 ft: 14891 corp: 26/420b lim: 40 exec/s: 44 rss: 73Mb L: 12/39 MS: 1 InsertRepeatedBytes- 00:06:37.772 [2024-07-12 13:32:26.224121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a79e170 cdw11:4cc004e5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.772 [2024-07-12 13:32:26.224145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.772 #45 NEW cov: 12129 ft: 14898 corp: 27/431b lim: 40 exec/s: 45 rss: 73Mb L: 11/39 MS: 1 ChangeBinInt- 00:06:37.772 [2024-07-12 13:32:26.284315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:620ae179 cdw11:e1704cfb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.772 [2024-07-12 13:32:26.284340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.772 #46 NEW cov: 12129 ft: 14918 corp: 28/446b lim: 40 exec/s: 46 rss: 73Mb L: 15/39 MS: 1 PersAutoDict- DE: "\000\000\000\004"- 00:06:37.772 [2024-07-12 13:32:26.334901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:79e1704c cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.772 [2024-07-12 13:32:26.334929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.772 [2024-07-12 13:32:26.334980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.772 [2024-07-12 13:32:26.334991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.772 [2024-07-12 13:32:26.335042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.772 [2024-07-12 13:32:26.335052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.772 [2024-07-12 13:32:26.335099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.772 [2024-07-12 13:32:26.335110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.772 [2024-07-12 13:32:26.335157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:fffffffb cdw11:1a27003b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.772 [2024-07-12 13:32:26.335168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:38.034 #47 NEW cov: 12129 ft: 15027 corp: 29/486b lim: 40 exec/s: 47 rss: 73Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:06:38.034 [2024-07-12 13:32:26.394961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a79e170 cdw11:4cfb1a27 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.034 [2024-07-12 13:32:26.394985] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.034 [2024-07-12 13:32:26.395035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:21212121 cdw11:21212121 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.034 [2024-07-12 13:32:26.395046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.034 [2024-07-12 13:32:26.395095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:21212121 cdw11:21212121 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.034 [2024-07-12 13:32:26.395105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.034 [2024-07-12 13:32:26.395157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:21212121 cdw11:21212121 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.034 [2024-07-12 13:32:26.395166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.034 #48 NEW cov: 12129 ft: 15043 corp: 30/520b lim: 40 exec/s: 48 rss: 73Mb L: 34/40 MS: 1 ShuffleBytes- 00:06:38.034 [2024-07-12 13:32:26.454903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:620ae179 cdw11:e1704cfb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.034 [2024-07-12 13:32:26.454928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.034 [2024-07-12 13:32:26.454980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:1a27003c cdw11:3c3c3c3c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.034 [2024-07-12 13:32:26.454991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.034 #49 NEW cov: 12129 ft: 15046 corp: 31/542b lim: 40 exec/s: 49 rss: 74Mb L: 22/40 MS: 1 InsertRepeatedBytes- 00:06:38.034 [2024-07-12 13:32:26.515274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a79e170 cdw11:4cfb1a27 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.034 [2024-07-12 13:32:26.515298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.034 [2024-07-12 13:32:26.515348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:21212121 cdw11:21212121 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.034 [2024-07-12 13:32:26.515360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.034 [2024-07-12 13:32:26.515409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:21212121 cdw11:21212121 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.034 [2024-07-12 13:32:26.515420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.034 [2024-07-12 13:32:26.515467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:21212121 cdw11:21212121 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:38.034 [2024-07-12 13:32:26.515478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.034 #50 NEW cov: 12129 ft: 15052 corp: 32/577b lim: 40 exec/s: 50 rss: 74Mb L: 35/40 MS: 1 InsertByte- 00:06:38.034 [2024-07-12 13:32:26.565068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:620ae179 cdw11:e1700f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.034 [2024-07-12 13:32:26.565092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.034 #51 NEW cov: 12129 ft: 15069 corp: 33/592b lim: 40 exec/s: 51 rss: 74Mb L: 15/40 MS: 1 ChangeBinInt- 00:06:38.034 [2024-07-12 13:32:26.615190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a79e170 cdw11:4cfb1a60 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.034 [2024-07-12 13:32:26.615215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.296 #52 NEW cov: 12129 ft: 15087 corp: 34/603b lim: 40 exec/s: 52 rss: 74Mb L: 11/40 MS: 1 InsertByte- 00:06:38.296 [2024-07-12 13:32:26.655656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:79e1704c cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.296 [2024-07-12 13:32:26.655681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.296 [2024-07-12 13:32:26.655729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffff00 cdw11:20ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.296 [2024-07-12 13:32:26.655740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.296 [2024-07-12 13:32:26.655786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.296 [2024-07-12 13:32:26.655797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.296 [2024-07-12 13:32:26.655845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:fffffffb cdw11:1a27003b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.296 [2024-07-12 13:32:26.655855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.296 #53 NEW cov: 12129 ft: 15093 corp: 35/635b lim: 40 exec/s: 53 rss: 74Mb L: 32/40 MS: 1 ChangeBinInt- 00:06:38.296 [2024-07-12 13:32:26.705544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:620ae179 cdw11:e1704cfb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.296 [2024-07-12 13:32:26.705572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.296 [2024-07-12 13:32:26.705624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:1afb1a27 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.296 [2024-07-12 13:32:26.705635] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.296 #54 NEW cov: 12129 ft: 15104 corp: 36/658b lim: 40 exec/s: 54 rss: 74Mb L: 23/40 MS: 1 CopyPart- 00:06:38.296 [2024-07-12 13:32:26.745524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e170704c cdw11:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.296 [2024-07-12 13:32:26.745548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.296 #55 NEW cov: 12129 ft: 15108 corp: 37/666b lim: 40 exec/s: 27 rss: 74Mb L: 8/40 MS: 1 PersAutoDict- DE: "\000\000\000\004"- 00:06:38.296 #55 DONE cov: 12129 ft: 15108 corp: 37/666b lim: 40 exec/s: 27 rss: 74Mb 00:06:38.296 ###### Recommended dictionary. ###### 00:06:38.296 "y\341pL\376\032'\000" # Uses: 2 00:06:38.296 "\000\000\000\004" # Uses: 2 00:06:38.296 ###### End of recommended dictionary. ###### 00:06:38.296 Done 55 runs in 2 second(s) 00:06:38.296 13:32:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:06:38.558 13:32:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:38.558 13:32:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:38.558 13:32:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:06:38.558 13:32:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:06:38.558 13:32:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:38.558 13:32:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:38.558 13:32:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:38.558 13:32:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:06:38.558 13:32:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:38.558 13:32:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:38.558 13:32:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:06:38.558 13:32:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:06:38.558 13:32:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:38.558 13:32:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:06:38.558 13:32:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:38.558 13:32:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:38.558 13:32:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:38.558 13:32:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:06:38.558 [2024-07-12 13:32:26.921847] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:06:38.558 [2024-07-12 13:32:26.921939] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2442883 ] 00:06:38.558 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.558 [2024-07-12 13:32:27.078762] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.558 [2024-07-12 13:32:27.134730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.819 [2024-07-12 13:32:27.196143] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.819 [2024-07-12 13:32:27.212493] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:06:38.819 INFO: Running with entropic power schedule (0xFF, 100). 00:06:38.819 INFO: Seed: 95857126 00:06:38.819 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:38.819 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:38.819 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:38.819 INFO: A corpus is not provided, starting from an empty corpus 00:06:38.819 #2 INITED exec/s: 0 rss: 64Mb 00:06:38.819 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:38.819 This may also happen if the target rejected all inputs we tried so far 00:06:38.819 [2024-07-12 13:32:27.272216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00848484 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.819 [2024-07-12 13:32:27.272261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.078 NEW_FUNC[1/695]: 0x492a60 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:06:39.078 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:39.078 #11 NEW cov: 11879 ft: 11898 corp: 2/14b lim: 40 exec/s: 0 rss: 70Mb L: 13/13 MS: 4 ChangeByte-CMP-CrossOver-InsertRepeatedBytes- DE: "\000\000\000\000"- 00:06:39.078 [2024-07-12 13:32:27.462917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4700000a cdw11:00848484 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.078 [2024-07-12 13:32:27.462970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.078 NEW_FUNC[1/1]: 0x1de0150 in thread_execute_poller /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:942 00:06:39.078 #12 NEW cov: 12027 ft: 12610 corp: 3/27b lim: 40 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 ChangeByte- 00:06:39.078 [2024-07-12 13:32:27.542887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4700000a cdw11:00848484 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.078 [2024-07-12 13:32:27.542919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.078 #13 NEW cov: 12033 ft: 12805 corp: 4/40b lim: 40 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 ChangeBinInt- 00:06:39.079 [2024-07-12 13:32:27.613228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4710000a cdw11:00848484 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.079 [2024-07-12 13:32:27.613263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.079 #14 NEW cov: 12118 ft: 13097 corp: 5/53b lim: 40 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 ChangeBit- 00:06:39.339 [2024-07-12 13:32:27.683502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4700000a cdw11:00848484 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.339 [2024-07-12 13:32:27.683530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.339 #15 NEW cov: 12118 ft: 13175 corp: 6/66b lim: 40 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 ShuffleBytes- 00:06:39.339 [2024-07-12 13:32:27.744814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4710000a cdw11:00848484 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.339 [2024-07-12 13:32:27.744841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.339 [2024-07-12 13:32:27.744972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:8a848484 cdw11:2e000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.339 [2024-07-12 13:32:27.744990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.339 [2024-07-12 13:32:27.745106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.339 [2024-07-12 13:32:27.745121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.339 [2024-07-12 13:32:27.745233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.339 [2024-07-12 13:32:27.745249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.339 #16 NEW cov: 12118 ft: 14052 corp: 7/102b lim: 40 exec/s: 0 rss: 70Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:06:39.339 [2024-07-12 13:32:27.824306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4710000a cdw11:00847a7a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.339 [2024-07-12 13:32:27.824338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.339 [2024-07-12 13:32:27.824458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:7a7a8484 cdw11:8a848484 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.339 [2024-07-12 13:32:27.824473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.339 #17 NEW cov: 12118 ft: 14308 corp: 8/119b lim: 40 exec/s: 0 rss: 72Mb L: 17/36 MS: 1 
InsertRepeatedBytes- 00:06:39.339 [2024-07-12 13:32:27.885301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4710000a cdw11:24000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.339 [2024-07-12 13:32:27.885329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.339 [2024-07-12 13:32:27.885447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:2e000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.339 [2024-07-12 13:32:27.885462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.339 [2024-07-12 13:32:27.885582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.339 [2024-07-12 13:32:27.885598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.339 [2024-07-12 13:32:27.885709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.339 [2024-07-12 13:32:27.885725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.601 #18 NEW cov: 12118 ft: 14356 corp: 9/155b lim: 40 exec/s: 0 rss: 72Mb L: 36/36 MS: 1 ChangeBinInt- 00:06:39.601 [2024-07-12 13:32:27.964452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4700000a cdw11:00848484 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.601 [2024-07-12 13:32:27.964480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.601 #19 NEW cov: 12118 ft: 14372 corp: 10/170b lim: 40 exec/s: 0 rss: 72Mb L: 15/36 MS: 1 CopyPart- 00:06:39.601 [2024-07-12 13:32:28.025000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4700000a cdw11:00848484 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.601 [2024-07-12 13:32:28.025029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.601 [2024-07-12 13:32:28.025148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:84840000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.601 [2024-07-12 13:32:28.025163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.601 #20 NEW cov: 12118 ft: 14421 corp: 11/189b lim: 40 exec/s: 0 rss: 72Mb L: 19/36 MS: 1 InsertRepeatedBytes- 00:06:39.601 [2024-07-12 13:32:28.084881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00848484 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.601 [2024-07-12 13:32:28.084909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.601 #21 NEW cov: 12118 ft: 14435 corp: 12/202b lim: 40 exec/s: 0 rss: 72Mb L: 13/36 MS: 1 ChangeByte- 00:06:39.601 [2024-07-12 13:32:28.145078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY 
SEND (81) qid:0 cid:4 nsid:0 cdw10:4700000a cdw11:00848484 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.601 [2024-07-12 13:32:28.145105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.601 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:39.601 #27 NEW cov: 12141 ft: 14580 corp: 13/214b lim: 40 exec/s: 0 rss: 72Mb L: 12/36 MS: 1 EraseBytes- 00:06:39.862 [2024-07-12 13:32:28.205290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4700000a cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.862 [2024-07-12 13:32:28.205323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.862 #28 NEW cov: 12141 ft: 14612 corp: 14/227b lim: 40 exec/s: 0 rss: 72Mb L: 13/36 MS: 1 CopyPart- 00:06:39.862 [2024-07-12 13:32:28.265500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4700000a cdw11:0084840d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.862 [2024-07-12 13:32:28.265529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.862 #29 NEW cov: 12141 ft: 14615 corp: 15/240b lim: 40 exec/s: 29 rss: 72Mb L: 13/36 MS: 1 CMP- DE: "\015\000\000\000"- 00:06:39.862 [2024-07-12 13:32:28.336835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4710000a cdw11:00848484 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.862 [2024-07-12 13:32:28.336863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.862 [2024-07-12 13:32:28.336987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:8a848484 cdw11:2e000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.862 [2024-07-12 13:32:28.337003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.862 [2024-07-12 13:32:28.337121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:24000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.862 [2024-07-12 13:32:28.337135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.862 [2024-07-12 13:32:28.337247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.862 [2024-07-12 13:32:28.337266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.862 #30 NEW cov: 12141 ft: 14637 corp: 16/276b lim: 40 exec/s: 30 rss: 72Mb L: 36/36 MS: 1 ChangeBinInt- 00:06:39.862 [2024-07-12 13:32:28.396042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1000000b cdw11:8484848a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.862 [2024-07-12 13:32:28.396074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.862 #34 NEW cov: 12141 ft: 14654 corp: 17/290b lim: 40 exec/s: 34 rss: 72Mb L: 14/36 MS: 4 
EraseBytes-ChangeBit-ChangeBit-CrossOver- 00:06:40.123 [2024-07-12 13:32:28.466654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4700000a cdw11:00848484 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.123 [2024-07-12 13:32:28.466682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.123 [2024-07-12 13:32:28.466800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:84840000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.123 [2024-07-12 13:32:28.466815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.123 #35 NEW cov: 12141 ft: 14664 corp: 18/309b lim: 40 exec/s: 35 rss: 72Mb L: 19/36 MS: 1 ShuffleBytes- 00:06:40.123 [2024-07-12 13:32:28.546494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:47000a00 cdw11:8400000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.123 [2024-07-12 13:32:28.546522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.123 #36 NEW cov: 12141 ft: 14674 corp: 19/322b lim: 40 exec/s: 36 rss: 72Mb L: 13/36 MS: 1 ShuffleBytes- 00:06:40.123 [2024-07-12 13:32:28.617164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00848447 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.123 [2024-07-12 13:32:28.617192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.123 [2024-07-12 13:32:28.617321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:10000a00 cdw11:84848484 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.123 [2024-07-12 13:32:28.617337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.123 #37 NEW cov: 12141 ft: 14693 corp: 20/338b lim: 40 exec/s: 37 rss: 72Mb L: 16/36 MS: 1 CrossOver- 00:06:40.123 [2024-07-12 13:32:28.677708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:47004d4d cdw11:4d4d4d4d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.123 [2024-07-12 13:32:28.677736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.123 [2024-07-12 13:32:28.677849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:4d4d4d4d cdw11:4d4d4d4d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.123 [2024-07-12 13:32:28.677866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.123 [2024-07-12 13:32:28.677990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:4d4d4d00 cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.123 [2024-07-12 13:32:28.678006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.382 #38 NEW cov: 12141 ft: 14894 corp: 21/368b lim: 40 exec/s: 38 rss: 72Mb L: 30/36 MS: 1 InsertRepeatedBytes- 00:06:40.382 [2024-07-12 13:32:28.737211] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00868484 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.382 [2024-07-12 13:32:28.737243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.382 #39 NEW cov: 12141 ft: 14902 corp: 22/381b lim: 40 exec/s: 39 rss: 72Mb L: 13/36 MS: 1 ChangeBit- 00:06:40.382 [2024-07-12 13:32:28.797839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00848047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.382 [2024-07-12 13:32:28.797869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.382 [2024-07-12 13:32:28.797985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:10000a00 cdw11:84848484 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.382 [2024-07-12 13:32:28.798000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.382 #40 NEW cov: 12141 ft: 14910 corp: 23/397b lim: 40 exec/s: 40 rss: 72Mb L: 16/36 MS: 1 ChangeBit- 00:06:40.382 [2024-07-12 13:32:28.877703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4706000a cdw11:00848484 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.382 [2024-07-12 13:32:28.877733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.382 #41 NEW cov: 12141 ft: 14915 corp: 24/409b lim: 40 exec/s: 41 rss: 72Mb L: 12/36 MS: 1 ChangeBinInt- 00:06:40.382 [2024-07-12 13:32:28.948268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4700000a cdw11:0084840d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.382 [2024-07-12 13:32:28.948297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.382 [2024-07-12 13:32:28.948416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000084 cdw11:0a008484 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.382 [2024-07-12 13:32:28.948434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.643 #42 NEW cov: 12141 ft: 14923 corp: 25/427b lim: 40 exec/s: 42 rss: 72Mb L: 18/36 MS: 1 CopyPart- 00:06:40.643 [2024-07-12 13:32:29.028193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:47100028 cdw11:00848484 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.643 [2024-07-12 13:32:29.028223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.643 #43 NEW cov: 12141 ft: 14931 corp: 26/440b lim: 40 exec/s: 43 rss: 72Mb L: 13/36 MS: 1 ChangeByte- 00:06:40.643 [2024-07-12 13:32:29.088435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4700000a cdw11:0084842e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.643 [2024-07-12 13:32:29.088462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.643 #44 NEW cov: 12141 ft: 15033 corp: 27/448b lim: 40 exec/s: 44 rss: 72Mb 
L: 8/36 MS: 1 EraseBytes- 00:06:40.643 [2024-07-12 13:32:29.149389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:47004d4d cdw11:4d4d4d4d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.643 [2024-07-12 13:32:29.149416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.643 [2024-07-12 13:32:29.149535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:4d4d4d4d cdw11:4d4d4d4d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.643 [2024-07-12 13:32:29.149553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.643 [2024-07-12 13:32:29.149670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:4d4d4d00 cdw11:0a000800 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.643 [2024-07-12 13:32:29.149685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.643 #45 NEW cov: 12141 ft: 15063 corp: 28/478b lim: 40 exec/s: 45 rss: 72Mb L: 30/36 MS: 1 ChangeBinInt- 00:06:40.904 [2024-07-12 13:32:29.228931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4700000a cdw11:00848484 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.904 [2024-07-12 13:32:29.228965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.904 #46 NEW cov: 12141 ft: 15070 corp: 29/489b lim: 40 exec/s: 23 rss: 72Mb L: 11/36 MS: 1 EraseBytes- 00:06:40.904 #46 DONE cov: 12141 ft: 15070 corp: 29/489b lim: 40 exec/s: 23 rss: 72Mb 00:06:40.904 ###### Recommended dictionary. ###### 00:06:40.904 "\000\000\000\000" # Uses: 0 00:06:40.904 "\015\000\000\000" # Uses: 0 00:06:40.904 ###### End of recommended dictionary. 
###### 00:06:40.904 Done 46 runs in 2 second(s) 00:06:40.905 13:32:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:06:40.905 13:32:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:40.905 13:32:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:40.905 13:32:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:06:40.905 13:32:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:06:40.905 13:32:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:40.905 13:32:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:40.905 13:32:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:40.905 13:32:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:06:40.905 13:32:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:40.905 13:32:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:40.905 13:32:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:06:40.905 13:32:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:06:40.905 13:32:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:40.905 13:32:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:06:40.905 13:32:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:40.905 13:32:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:40.905 13:32:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:40.905 13:32:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:06:40.905 [2024-07-12 13:32:29.385712] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
00:06:40.905 [2024-07-12 13:32:29.385782] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2443533 ] 00:06:40.905 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.166 [2024-07-12 13:32:29.540136] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.166 [2024-07-12 13:32:29.597177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.166 [2024-07-12 13:32:29.658928] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.166 [2024-07-12 13:32:29.675234] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:06:41.166 INFO: Running with entropic power schedule (0xFF, 100). 00:06:41.166 INFO: Seed: 2560857193 00:06:41.166 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:41.166 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:41.166 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:41.166 INFO: A corpus is not provided, starting from an empty corpus 00:06:41.166 #2 INITED exec/s: 0 rss: 64Mb 00:06:41.166 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:41.166 This may also happen if the target rejected all inputs we tried so far 00:06:41.166 [2024-07-12 13:32:29.742664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a4040 cdw11:40404040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.166 [2024-07-12 13:32:29.742699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.166 [2024-07-12 13:32:29.742827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:40404040 cdw11:40404040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.166 [2024-07-12 13:32:29.742843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.426 NEW_FUNC[1/696]: 0x4947d0 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:06:41.426 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:41.426 #14 NEW cov: 11895 ft: 11886 corp: 2/24b lim: 40 exec/s: 0 rss: 70Mb L: 23/23 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:41.426 [2024-07-12 13:32:29.933389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a404040 cdw11:40404040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.426 [2024-07-12 13:32:29.933437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.426 [2024-07-12 13:32:29.933563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:40404040 cdw11:40404040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.426 [2024-07-12 13:32:29.933582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.426 #15 NEW cov: 12025 ft: 12492 corp: 3/45b lim: 40 exec/s: 0 rss: 
70Mb L: 21/23 MS: 1 EraseBytes- 00:06:41.687 [2024-07-12 13:32:30.014629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.687 [2024-07-12 13:32:30.014660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.687 [2024-07-12 13:32:30.014776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.687 [2024-07-12 13:32:30.014790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.687 [2024-07-12 13:32:30.014908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.687 [2024-07-12 13:32:30.014924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.687 [2024-07-12 13:32:30.015044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.687 [2024-07-12 13:32:30.015059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.688 [2024-07-12 13:32:30.015177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5af7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.688 [2024-07-12 13:32:30.015192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:41.688 #28 NEW cov: 12031 ft: 13158 corp: 4/85b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 3 ChangeBinInt-ShuffleBytes-InsertRepeatedBytes- 00:06:41.688 [2024-07-12 13:32:30.073381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.688 [2024-07-12 13:32:30.073413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.688 #29 NEW cov: 12116 ft: 14118 corp: 5/94b lim: 40 exec/s: 0 rss: 70Mb L: 9/40 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:06:41.688 [2024-07-12 13:32:30.134355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.688 [2024-07-12 13:32:30.134383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.688 [2024-07-12 13:32:30.134509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.688 [2024-07-12 13:32:30.134525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.688 [2024-07-12 13:32:30.134638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.688 [2024-07-12 13:32:30.134654] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.688 #30 NEW cov: 12116 ft: 14436 corp: 6/123b lim: 40 exec/s: 0 rss: 70Mb L: 29/40 MS: 1 EraseBytes- 00:06:41.688 [2024-07-12 13:32:30.213828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.688 [2024-07-12 13:32:30.213859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.688 #31 NEW cov: 12116 ft: 14593 corp: 7/132b lim: 40 exec/s: 0 rss: 70Mb L: 9/40 MS: 1 ChangeBit- 00:06:41.949 [2024-07-12 13:32:30.294614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a4040 cdw11:40404840 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.949 [2024-07-12 13:32:30.294643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.949 [2024-07-12 13:32:30.294765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:40404040 cdw11:40404040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.949 [2024-07-12 13:32:30.294778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.949 #32 NEW cov: 12116 ft: 14669 corp: 8/155b lim: 40 exec/s: 0 rss: 70Mb L: 23/40 MS: 1 ChangeBit- 00:06:41.949 [2024-07-12 13:32:30.355888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.949 [2024-07-12 13:32:30.355915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.949 [2024-07-12 13:32:30.356026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.949 [2024-07-12 13:32:30.356042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.949 [2024-07-12 13:32:30.356163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.949 [2024-07-12 13:32:30.356178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.949 [2024-07-12 13:32:30.356301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.949 [2024-07-12 13:32:30.356318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.949 [2024-07-12 13:32:30.356441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5af7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.949 [2024-07-12 13:32:30.356456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:41.949 #33 NEW cov: 12116 ft: 14796 corp: 9/195b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ShuffleBytes- 00:06:41.949 
[2024-07-12 13:32:30.414590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01023a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.949 [2024-07-12 13:32:30.414622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.949 #34 NEW cov: 12116 ft: 14824 corp: 10/204b lim: 40 exec/s: 0 rss: 70Mb L: 9/40 MS: 1 ChangeByte- 00:06:41.949 [2024-07-12 13:32:30.485304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a4040 cdw11:40404040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.949 [2024-07-12 13:32:30.485333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.949 [2024-07-12 13:32:30.485452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:40404040 cdw11:40424040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.949 [2024-07-12 13:32:30.485470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.949 #35 NEW cov: 12116 ft: 14912 corp: 11/227b lim: 40 exec/s: 0 rss: 70Mb L: 23/40 MS: 1 ChangeBit- 00:06:42.210 [2024-07-12 13:32:30.545137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.210 [2024-07-12 13:32:30.545166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.210 #36 NEW cov: 12116 ft: 14947 corp: 12/241b lim: 40 exec/s: 0 rss: 70Mb L: 14/40 MS: 1 CopyPart- 00:06:42.210 [2024-07-12 13:32:30.605343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a252500 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.210 [2024-07-12 13:32:30.605373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.210 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:42.210 #39 NEW cov: 12139 ft: 15046 corp: 13/253b lim: 40 exec/s: 0 rss: 72Mb L: 12/40 MS: 3 ShuffleBytes-InsertRepeatedBytes-CrossOver- 00:06:42.210 [2024-07-12 13:32:30.667092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.210 [2024-07-12 13:32:30.667122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.210 [2024-07-12 13:32:30.667252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.210 [2024-07-12 13:32:30.667269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.210 [2024-07-12 13:32:30.667389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.210 [2024-07-12 13:32:30.667406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 
m:0 dnr:0 00:06:42.210 [2024-07-12 13:32:30.667528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.210 [2024-07-12 13:32:30.667543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.210 [2024-07-12 13:32:30.667668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5af7 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.210 [2024-07-12 13:32:30.667684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:42.210 #40 NEW cov: 12139 ft: 15109 corp: 14/293b lim: 40 exec/s: 40 rss: 72Mb L: 40/40 MS: 1 CopyPart- 00:06:42.211 [2024-07-12 13:32:30.746287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a4040 cdw11:40404040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.211 [2024-07-12 13:32:30.746317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.211 [2024-07-12 13:32:30.746440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:40404040 cdw11:40404040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.211 [2024-07-12 13:32:30.746454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.211 #41 NEW cov: 12139 ft: 15135 corp: 15/316b lim: 40 exec/s: 41 rss: 72Mb L: 23/40 MS: 1 ChangeByte- 00:06:42.471 [2024-07-12 13:32:30.806442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a404040 cdw11:40404040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.471 [2024-07-12 13:32:30.806469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.471 [2024-07-12 13:32:30.806589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:40404040 cdw11:40404040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.471 [2024-07-12 13:32:30.806605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.471 #42 NEW cov: 12139 ft: 15145 corp: 16/338b lim: 40 exec/s: 42 rss: 72Mb L: 22/40 MS: 1 InsertByte- 00:06:42.471 [2024-07-12 13:32:30.876782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01020001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.472 [2024-07-12 13:32:30.876810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.472 [2024-07-12 13:32:30.876929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.472 [2024-07-12 13:32:30.876945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.472 #43 NEW cov: 12139 ft: 15198 corp: 17/355b lim: 40 exec/s: 43 rss: 72Mb L: 17/40 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:42.472 [2024-07-12 13:32:30.936734] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a252500 cdw11:00000a25 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.472 [2024-07-12 13:32:30.936763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.472 #44 NEW cov: 12139 ft: 15211 corp: 18/364b lim: 40 exec/s: 44 rss: 72Mb L: 9/40 MS: 1 EraseBytes- 00:06:42.472 [2024-07-12 13:32:31.017399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a404040 cdw11:40834040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.472 [2024-07-12 13:32:31.017427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.472 [2024-07-12 13:32:31.017545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:40404040 cdw11:40404040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.472 [2024-07-12 13:32:31.017562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.472 #45 NEW cov: 12139 ft: 15224 corp: 19/386b lim: 40 exec/s: 45 rss: 72Mb L: 22/40 MS: 1 InsertByte- 00:06:42.732 [2024-07-12 13:32:31.077588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a404040 cdw11:40404040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.732 [2024-07-12 13:32:31.077616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.732 [2024-07-12 13:32:31.077736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:40404040 cdw11:40404040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.732 [2024-07-12 13:32:31.077752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.732 #46 NEW cov: 12139 ft: 15248 corp: 20/407b lim: 40 exec/s: 46 rss: 72Mb L: 21/40 MS: 1 ChangeByte- 00:06:42.732 [2024-07-12 13:32:31.137789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a404040 cdw11:40834040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.732 [2024-07-12 13:32:31.137818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.732 [2024-07-12 13:32:31.137940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:40404040 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.732 [2024-07-12 13:32:31.137955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.732 #52 NEW cov: 12139 ft: 15263 corp: 21/429b lim: 40 exec/s: 52 rss: 72Mb L: 22/40 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:42.732 [2024-07-12 13:32:31.209181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a404040 cdw11:40834040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.732 [2024-07-12 13:32:31.209209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.732 [2024-07-12 13:32:31.209333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 
cdw10:40404040 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.732 [2024-07-12 13:32:31.209348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.732 [2024-07-12 13:32:31.209459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000040 cdw11:40404083 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.732 [2024-07-12 13:32:31.209474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.732 [2024-07-12 13:32:31.209593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:40404040 cdw11:40400100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.732 [2024-07-12 13:32:31.209608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.732 [2024-07-12 13:32:31.209731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00004040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.732 [2024-07-12 13:32:31.209748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:42.732 #53 NEW cov: 12139 ft: 15275 corp: 22/469b lim: 40 exec/s: 53 rss: 72Mb L: 40/40 MS: 1 CopyPart- 00:06:42.732 [2024-07-12 13:32:31.288025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00252500 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.732 [2024-07-12 13:32:31.288053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.993 #54 NEW cov: 12139 ft: 15303 corp: 23/481b lim: 40 exec/s: 54 rss: 72Mb L: 12/40 MS: 1 CopyPart- 00:06:42.994 [2024-07-12 13:32:31.348919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5a5a000a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.994 [2024-07-12 13:32:31.348949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.994 [2024-07-12 13:32:31.349066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.994 [2024-07-12 13:32:31.349082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.994 [2024-07-12 13:32:31.349195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.994 [2024-07-12 13:32:31.349209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.994 #55 NEW cov: 12139 ft: 15322 corp: 24/512b lim: 40 exec/s: 55 rss: 72Mb L: 31/40 MS: 1 CMP- DE: "\000\012"- 00:06:42.994 [2024-07-12 13:32:31.418852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a4040 cdw11:40404040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.994 [2024-07-12 13:32:31.418879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:42.994 [2024-07-12 13:32:31.418994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:40404040 cdw11:40404040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.994 [2024-07-12 13:32:31.419010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.994 [2024-07-12 13:32:31.479084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a4040 cdw11:40404040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.994 [2024-07-12 13:32:31.479113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.994 [2024-07-12 13:32:31.479234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:4040403e cdw11:40404040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.994 [2024-07-12 13:32:31.479250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.994 #57 NEW cov: 12139 ft: 15327 corp: 25/535b lim: 40 exec/s: 57 rss: 72Mb L: 23/40 MS: 2 CopyPart-ChangeByte- 00:06:42.994 [2024-07-12 13:32:31.538924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.994 [2024-07-12 13:32:31.538952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.994 #58 NEW cov: 12139 ft: 15338 corp: 26/549b lim: 40 exec/s: 58 rss: 72Mb L: 14/40 MS: 1 InsertRepeatedBytes- 00:06:43.255 [2024-07-12 13:32:31.600357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a404040 cdw11:40834040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.255 [2024-07-12 13:32:31.600385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.255 [2024-07-12 13:32:31.600502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:40fbfbfb cdw11:fbfbfbfb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.255 [2024-07-12 13:32:31.600518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.255 [2024-07-12 13:32:31.600639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:fbfbfb40 cdw11:40404040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.255 [2024-07-12 13:32:31.600654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.255 [2024-07-12 13:32:31.600766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:40404040 cdw11:40404040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.255 [2024-07-12 13:32:31.600785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.255 #59 NEW cov: 12139 ft: 15402 corp: 27/581b lim: 40 exec/s: 59 rss: 72Mb L: 32/40 MS: 1 InsertRepeatedBytes- 00:06:43.255 [2024-07-12 13:32:31.660558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:43.255 [2024-07-12 13:32:31.660586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.255 [2024-07-12 13:32:31.660709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.255 [2024-07-12 13:32:31.660725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.255 [2024-07-12 13:32:31.660841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.255 [2024-07-12 13:32:31.660854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.255 [2024-07-12 13:32:31.660971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:5a5a5a5a cdw11:5a5a5a5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.255 [2024-07-12 13:32:31.660986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.255 #60 NEW cov: 12139 ft: 15408 corp: 28/614b lim: 40 exec/s: 60 rss: 72Mb L: 33/40 MS: 1 CrossOver- 00:06:43.255 [2024-07-12 13:32:31.739786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a404040 cdw11:40834040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.255 [2024-07-12 13:32:31.739815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.255 #62 NEW cov: 12139 ft: 15410 corp: 29/628b lim: 40 exec/s: 31 rss: 72Mb L: 14/40 MS: 2 ChangeByte-CrossOver- 00:06:43.255 #62 DONE cov: 12139 ft: 15410 corp: 29/628b lim: 40 exec/s: 31 rss: 72Mb 00:06:43.255 ###### Recommended dictionary. ###### 00:06:43.255 "\001\000\000\000\000\000\000\000" # Uses: 2 00:06:43.255 "\000\012" # Uses: 0 00:06:43.255 ###### End of recommended dictionary. 
###### 00:06:43.255 Done 62 runs in 2 second(s) 00:06:43.515 13:32:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:06:43.515 13:32:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:43.515 13:32:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:43.515 13:32:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:06:43.515 13:32:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:06:43.515 13:32:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:43.515 13:32:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:43.515 13:32:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:43.515 13:32:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:06:43.515 13:32:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:43.516 13:32:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:43.516 13:32:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:06:43.516 13:32:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:06:43.516 13:32:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:43.516 13:32:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:06:43.516 13:32:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:43.516 13:32:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:43.516 13:32:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:43.516 13:32:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:06:43.516 [2024-07-12 13:32:31.899341] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
00:06:43.516 [2024-07-12 13:32:31.899411] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2443907 ] 00:06:43.516 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.516 [2024-07-12 13:32:32.049410] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.776 [2024-07-12 13:32:32.102947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.776 [2024-07-12 13:32:32.164490] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.776 [2024-07-12 13:32:32.180787] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:06:43.776 INFO: Running with entropic power schedule (0xFF, 100). 00:06:43.776 INFO: Seed: 771883068 00:06:43.776 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:43.776 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:43.776 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:43.776 INFO: A corpus is not provided, starting from an empty corpus 00:06:43.776 #2 INITED exec/s: 0 rss: 64Mb 00:06:43.776 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:43.776 This may also happen if the target rejected all inputs we tried so far 00:06:43.776 [2024-07-12 13:32:32.236024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.776 [2024-07-12 13:32:32.236054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.776 [2024-07-12 13:32:32.236103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.776 [2024-07-12 13:32:32.236113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.776 [2024-07-12 13:32:32.236157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.776 [2024-07-12 13:32:32.236168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.037 NEW_FUNC[1/695]: 0x496390 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:06:44.037 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:44.037 #11 NEW cov: 11883 ft: 11877 corp: 2/26b lim: 40 exec/s: 0 rss: 70Mb L: 25/25 MS: 4 CopyPart-CrossOver-CopyPart-InsertRepeatedBytes- 00:06:44.037 [2024-07-12 13:32:32.417007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1c484848 cdw11:48484848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.037 [2024-07-12 13:32:32.417062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.037 [2024-07-12 
13:32:32.417145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:48484848 cdw11:48484848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.037 [2024-07-12 13:32:32.417166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.037 [2024-07-12 13:32:32.417245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:48484848 cdw11:48484848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.037 [2024-07-12 13:32:32.417265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.037 [2024-07-12 13:32:32.417338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:48484848 cdw11:48484848 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.037 [2024-07-12 13:32:32.417356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.037 #19 NEW cov: 12013 ft: 12897 corp: 3/61b lim: 40 exec/s: 0 rss: 70Mb L: 35/35 MS: 3 ChangeByte-InsertByte-InsertRepeatedBytes- 00:06:44.037 [2024-07-12 13:32:32.476763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.037 [2024-07-12 13:32:32.476789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.037 [2024-07-12 13:32:32.476838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff0a cdw11:0affffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.037 [2024-07-12 13:32:32.476849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.037 [2024-07-12 13:32:32.476899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.037 [2024-07-12 13:32:32.476909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.037 [2024-07-12 13:32:32.476955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.037 [2024-07-12 13:32:32.476964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.037 #20 NEW cov: 12019 ft: 13215 corp: 4/95b lim: 40 exec/s: 0 rss: 70Mb L: 34/35 MS: 1 InsertRepeatedBytes- 00:06:44.037 [2024-07-12 13:32:32.536810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.037 [2024-07-12 13:32:32.536835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.037 [2024-07-12 13:32:32.536883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.037 [2024-07-12 13:32:32.536894] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.037 [2024-07-12 13:32:32.536942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.037 [2024-07-12 13:32:32.536953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.037 #21 NEW cov: 12104 ft: 13469 corp: 5/125b lim: 40 exec/s: 0 rss: 70Mb L: 30/35 MS: 1 CrossOver- 00:06:44.038 [2024-07-12 13:32:32.586946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.038 [2024-07-12 13:32:32.586971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.038 [2024-07-12 13:32:32.587021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.038 [2024-07-12 13:32:32.587033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.038 [2024-07-12 13:32:32.587081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.038 [2024-07-12 13:32:32.587092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.298 #22 NEW cov: 12104 ft: 13605 corp: 6/155b lim: 40 exec/s: 0 rss: 70Mb L: 30/35 MS: 1 ShuffleBytes- 00:06:44.298 [2024-07-12 13:32:32.647199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.299 [2024-07-12 13:32:32.647224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.299 [2024-07-12 13:32:32.647278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.299 [2024-07-12 13:32:32.647289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.299 [2024-07-12 13:32:32.647335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.299 [2024-07-12 13:32:32.647345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.299 [2024-07-12 13:32:32.647393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.299 [2024-07-12 13:32:32.647403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.299 #23 NEW cov: 12104 ft: 13736 corp: 7/189b lim: 40 exec/s: 0 rss: 70Mb L: 34/35 MS: 1 InsertRepeatedBytes- 00:06:44.299 [2024-07-12 13:32:32.697356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE 
(1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.299 [2024-07-12 13:32:32.697380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.299 [2024-07-12 13:32:32.697431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.299 [2024-07-12 13:32:32.697442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.299 [2024-07-12 13:32:32.697485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.299 [2024-07-12 13:32:32.697495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.299 [2024-07-12 13:32:32.697542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.299 [2024-07-12 13:32:32.697552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.299 #24 NEW cov: 12104 ft: 13788 corp: 8/224b lim: 40 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:44.299 [2024-07-12 13:32:32.757395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.299 [2024-07-12 13:32:32.757423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.299 [2024-07-12 13:32:32.757471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.299 [2024-07-12 13:32:32.757481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.299 [2024-07-12 13:32:32.757526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.299 [2024-07-12 13:32:32.757536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.299 #25 NEW cov: 12104 ft: 13813 corp: 9/255b lim: 40 exec/s: 0 rss: 70Mb L: 31/35 MS: 1 EraseBytes- 00:06:44.299 [2024-07-12 13:32:32.817667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a7affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.299 [2024-07-12 13:32:32.817691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.299 [2024-07-12 13:32:32.817741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff0a cdw11:0affffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.299 [2024-07-12 13:32:32.817752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.299 [2024-07-12 13:32:32.817799] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.299 [2024-07-12 13:32:32.817809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.299 [2024-07-12 13:32:32.817855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.299 [2024-07-12 13:32:32.817865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.299 #26 NEW cov: 12104 ft: 13885 corp: 10/289b lim: 40 exec/s: 0 rss: 72Mb L: 34/35 MS: 1 ChangeByte- 00:06:44.299 [2024-07-12 13:32:32.877694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.299 [2024-07-12 13:32:32.877717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.299 [2024-07-12 13:32:32.877766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.299 [2024-07-12 13:32:32.877776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.299 [2024-07-12 13:32:32.877823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.299 [2024-07-12 13:32:32.877833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.564 #27 NEW cov: 12104 ft: 13919 corp: 11/319b lim: 40 exec/s: 0 rss: 72Mb L: 30/35 MS: 1 CopyPart- 00:06:44.564 [2024-07-12 13:32:32.917904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.564 [2024-07-12 13:32:32.917928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.564 [2024-07-12 13:32:32.917976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.564 [2024-07-12 13:32:32.917990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.564 [2024-07-12 13:32:32.918036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.564 [2024-07-12 13:32:32.918047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.564 [2024-07-12 13:32:32.918094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.564 [2024-07-12 13:32:32.918104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:06:44.564 #28 NEW cov: 12104 ft: 14028 corp: 12/355b lim: 40 exec/s: 0 rss: 72Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:06:44.564 [2024-07-12 13:32:32.977961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:120a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.564 [2024-07-12 13:32:32.977984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.564 [2024-07-12 13:32:32.978034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.564 [2024-07-12 13:32:32.978044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.564 [2024-07-12 13:32:32.978091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.564 [2024-07-12 13:32:32.978101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.564 #29 NEW cov: 12104 ft: 14044 corp: 13/385b lim: 40 exec/s: 0 rss: 72Mb L: 30/36 MS: 1 ChangeBinInt- 00:06:44.564 [2024-07-12 13:32:33.028106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:120a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.564 [2024-07-12 13:32:33.028130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.564 [2024-07-12 13:32:33.028180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.564 [2024-07-12 13:32:33.028191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.564 [2024-07-12 13:32:33.028241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.564 [2024-07-12 13:32:33.028252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.564 #30 NEW cov: 12104 ft: 14108 corp: 14/416b lim: 40 exec/s: 0 rss: 72Mb L: 31/36 MS: 1 InsertByte- 00:06:44.564 [2024-07-12 13:32:33.088348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.564 [2024-07-12 13:32:33.088373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.564 [2024-07-12 13:32:33.088419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.564 [2024-07-12 13:32:33.088429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.564 [2024-07-12 13:32:33.088474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff 
cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.564 [2024-07-12 13:32:33.088488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.564 [2024-07-12 13:32:33.088534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.564 [2024-07-12 13:32:33.088545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.564 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:44.564 #31 NEW cov: 12127 ft: 14145 corp: 15/451b lim: 40 exec/s: 0 rss: 72Mb L: 35/36 MS: 1 InsertRepeatedBytes- 00:06:44.564 [2024-07-12 13:32:33.138523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.564 [2024-07-12 13:32:33.138551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.564 [2024-07-12 13:32:33.138599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.564 [2024-07-12 13:32:33.138610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.564 [2024-07-12 13:32:33.138656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.564 [2024-07-12 13:32:33.138667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.564 [2024-07-12 13:32:33.138713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.564 [2024-07-12 13:32:33.138723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.826 #32 NEW cov: 12127 ft: 14176 corp: 16/488b lim: 40 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 CopyPart- 00:06:44.826 [2024-07-12 13:32:33.198658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.826 [2024-07-12 13:32:33.198682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.826 [2024-07-12 13:32:33.198729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.826 [2024-07-12 13:32:33.198740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.826 [2024-07-12 13:32:33.198786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.826 [2024-07-12 13:32:33.198797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.826 [2024-07-12 13:32:33.198843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.826 [2024-07-12 13:32:33.198853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.826 #33 NEW cov: 12127 ft: 14191 corp: 17/525b lim: 40 exec/s: 33 rss: 72Mb L: 37/37 MS: 1 ChangeBit- 00:06:44.826 [2024-07-12 13:32:33.258722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.826 [2024-07-12 13:32:33.258750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.826 [2024-07-12 13:32:33.258800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.826 [2024-07-12 13:32:33.258811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.826 [2024-07-12 13:32:33.258862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.826 [2024-07-12 13:32:33.258873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.826 #34 NEW cov: 12127 ft: 14215 corp: 18/556b lim: 40 exec/s: 34 rss: 72Mb L: 31/37 MS: 1 InsertByte- 00:06:44.826 [2024-07-12 13:32:33.298768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.826 [2024-07-12 13:32:33.298793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.826 [2024-07-12 13:32:33.298841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.826 [2024-07-12 13:32:33.298851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.826 [2024-07-12 13:32:33.298899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.826 [2024-07-12 13:32:33.298909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.826 #35 NEW cov: 12127 ft: 14237 corp: 19/587b lim: 40 exec/s: 35 rss: 72Mb L: 31/37 MS: 1 InsertByte- 00:06:44.826 [2024-07-12 13:32:33.339020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.826 [2024-07-12 13:32:33.339044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.826 [2024-07-12 13:32:33.339091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff 
cdw11:ff8dffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.826 [2024-07-12 13:32:33.339101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.826 [2024-07-12 13:32:33.339149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.826 [2024-07-12 13:32:33.339160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.826 [2024-07-12 13:32:33.339207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.826 [2024-07-12 13:32:33.339217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.826 #36 NEW cov: 12127 ft: 14253 corp: 20/622b lim: 40 exec/s: 36 rss: 72Mb L: 35/37 MS: 1 InsertByte- 00:06:44.826 [2024-07-12 13:32:33.389132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.826 [2024-07-12 13:32:33.389155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.826 [2024-07-12 13:32:33.389204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff0a cdw11:0affffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.826 [2024-07-12 13:32:33.389217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.826 [2024-07-12 13:32:33.389266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.826 [2024-07-12 13:32:33.389277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.826 [2024-07-12 13:32:33.389323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffff3f cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.826 [2024-07-12 13:32:33.389333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.087 #37 NEW cov: 12127 ft: 14293 corp: 21/657b lim: 40 exec/s: 37 rss: 72Mb L: 35/37 MS: 1 InsertByte- 00:06:45.087 [2024-07-12 13:32:33.439199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.087 [2024-07-12 13:32:33.439223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.087 [2024-07-12 13:32:33.439275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.087 [2024-07-12 13:32:33.439294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.087 [2024-07-12 13:32:33.439342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff08ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.087 [2024-07-12 13:32:33.439352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.087 #38 NEW cov: 12127 ft: 14314 corp: 22/688b lim: 40 exec/s: 38 rss: 72Mb L: 31/37 MS: 1 ChangeBinInt- 00:06:45.087 [2024-07-12 13:32:33.499243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.087 [2024-07-12 13:32:33.499268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.087 [2024-07-12 13:32:33.499317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.087 [2024-07-12 13:32:33.499327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.087 #39 NEW cov: 12127 ft: 14561 corp: 23/710b lim: 40 exec/s: 39 rss: 72Mb L: 22/37 MS: 1 EraseBytes- 00:06:45.087 [2024-07-12 13:32:33.549567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.087 [2024-07-12 13:32:33.549590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.087 [2024-07-12 13:32:33.549639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff8dffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.087 [2024-07-12 13:32:33.549650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.087 [2024-07-12 13:32:33.549694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.087 [2024-07-12 13:32:33.549704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.087 [2024-07-12 13:32:33.549752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.087 [2024-07-12 13:32:33.549766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.087 #40 NEW cov: 12127 ft: 14603 corp: 24/745b lim: 40 exec/s: 40 rss: 72Mb L: 35/37 MS: 1 ShuffleBytes- 00:06:45.087 [2024-07-12 13:32:33.609851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.087 [2024-07-12 13:32:33.609874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.088 [2024-07-12 13:32:33.609922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff17 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.088 [2024-07-12 13:32:33.609932] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.088 [2024-07-12 13:32:33.609978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffff8dff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.088 [2024-07-12 13:32:33.609988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.088 [2024-07-12 13:32:33.610036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.088 [2024-07-12 13:32:33.610046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.088 [2024-07-12 13:32:33.610094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.088 [2024-07-12 13:32:33.610104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.088 #41 NEW cov: 12127 ft: 14673 corp: 25/785b lim: 40 exec/s: 41 rss: 72Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:06:45.088 [2024-07-12 13:32:33.659749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.088 [2024-07-12 13:32:33.659773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.088 [2024-07-12 13:32:33.659822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff1cffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.088 [2024-07-12 13:32:33.659833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.088 [2024-07-12 13:32:33.659882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.088 [2024-07-12 13:32:33.659893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.349 #42 NEW cov: 12127 ft: 14715 corp: 26/815b lim: 40 exec/s: 42 rss: 72Mb L: 30/40 MS: 1 ChangeByte- 00:06:45.349 [2024-07-12 13:32:33.699897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.349 [2024-07-12 13:32:33.699922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.349 [2024-07-12 13:32:33.699970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.349 [2024-07-12 13:32:33.699980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.349 [2024-07-12 13:32:33.700031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:06:45.349 [2024-07-12 13:32:33.700041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.349 #43 NEW cov: 12127 ft: 14727 corp: 27/845b lim: 40 exec/s: 43 rss: 72Mb L: 30/40 MS: 1 ShuffleBytes- 00:06:45.349 [2024-07-12 13:32:33.740119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.349 [2024-07-12 13:32:33.740144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.349 [2024-07-12 13:32:33.740190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff8dffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.349 [2024-07-12 13:32:33.740201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.349 [2024-07-12 13:32:33.740249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.349 [2024-07-12 13:32:33.740259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.349 [2024-07-12 13:32:33.740306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.349 [2024-07-12 13:32:33.740317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.349 #44 NEW cov: 12127 ft: 14732 corp: 28/880b lim: 40 exec/s: 44 rss: 73Mb L: 35/40 MS: 1 ShuffleBytes- 00:06:45.349 [2024-07-12 13:32:33.800264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.349 [2024-07-12 13:32:33.800289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.349 [2024-07-12 13:32:33.800340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff0a cdw11:0affffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.349 [2024-07-12 13:32:33.800351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.349 [2024-07-12 13:32:33.800399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffbfff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.350 [2024-07-12 13:32:33.800409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.350 [2024-07-12 13:32:33.800458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffff3f cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.350 [2024-07-12 13:32:33.800468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.350 #45 NEW cov: 12127 ft: 14798 corp: 29/915b lim: 40 exec/s: 45 rss: 73Mb L: 35/40 MS: 1 ChangeBit- 00:06:45.350 
[2024-07-12 13:32:33.860321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.350 [2024-07-12 13:32:33.860345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.350 [2024-07-12 13:32:33.860394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.350 [2024-07-12 13:32:33.860404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.350 [2024-07-12 13:32:33.860453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.350 [2024-07-12 13:32:33.860464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.350 #46 NEW cov: 12127 ft: 14811 corp: 30/945b lim: 40 exec/s: 46 rss: 73Mb L: 30/40 MS: 1 EraseBytes- 00:06:45.350 [2024-07-12 13:32:33.910556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.350 [2024-07-12 13:32:33.910581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.350 [2024-07-12 13:32:33.910629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff8dffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.350 [2024-07-12 13:32:33.910639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.350 [2024-07-12 13:32:33.910687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.350 [2024-07-12 13:32:33.910697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.350 [2024-07-12 13:32:33.910746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.350 [2024-07-12 13:32:33.910756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.611 #47 NEW cov: 12127 ft: 14826 corp: 31/980b lim: 40 exec/s: 47 rss: 73Mb L: 35/40 MS: 1 ShuffleBytes- 00:06:45.611 [2024-07-12 13:32:33.970726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:fffeffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.611 [2024-07-12 13:32:33.970750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.611 [2024-07-12 13:32:33.970799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.611 [2024-07-12 13:32:33.970810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.611 [2024-07-12 13:32:33.970858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.611 [2024-07-12 13:32:33.970869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.611 [2024-07-12 13:32:33.970915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.611 [2024-07-12 13:32:33.970925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.611 #48 NEW cov: 12127 ft: 14843 corp: 32/1017b lim: 40 exec/s: 48 rss: 73Mb L: 37/40 MS: 1 ChangeBit- 00:06:45.611 [2024-07-12 13:32:34.010727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.611 [2024-07-12 13:32:34.010751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.611 [2024-07-12 13:32:34.010801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.611 [2024-07-12 13:32:34.010815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.611 [2024-07-12 13:32:34.010860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.611 [2024-07-12 13:32:34.010870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.611 #49 NEW cov: 12127 ft: 14879 corp: 33/1048b lim: 40 exec/s: 49 rss: 73Mb L: 31/40 MS: 1 CMP- DE: "\000\002\000\000"- 00:06:45.611 [2024-07-12 13:32:34.070874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.611 [2024-07-12 13:32:34.070899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.611 [2024-07-12 13:32:34.070950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.611 [2024-07-12 13:32:34.070961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.611 [2024-07-12 13:32:34.071006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.611 [2024-07-12 13:32:34.071017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.611 #50 NEW cov: 12127 ft: 14883 corp: 34/1078b lim: 40 exec/s: 50 rss: 73Mb L: 30/40 MS: 1 ChangeByte- 00:06:45.611 [2024-07-12 13:32:34.131141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 
cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffff4cff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.611 [2024-07-12 13:32:34.131166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.611 [2024-07-12 13:32:34.131213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff8dff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.611 [2024-07-12 13:32:34.131223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.611 [2024-07-12 13:32:34.131277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.611 [2024-07-12 13:32:34.131287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.611 [2024-07-12 13:32:34.131333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.611 [2024-07-12 13:32:34.131343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.611 #51 NEW cov: 12127 ft: 14927 corp: 35/1114b lim: 40 exec/s: 51 rss: 73Mb L: 36/40 MS: 1 InsertByte- 00:06:45.611 [2024-07-12 13:32:34.171109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0a0a73 cdw11:0affffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.611 [2024-07-12 13:32:34.171133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.611 [2024-07-12 13:32:34.171181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.611 [2024-07-12 13:32:34.171191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.611 [2024-07-12 13:32:34.171243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.611 [2024-07-12 13:32:34.171257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.872 #52 NEW cov: 12127 ft: 14944 corp: 36/1145b lim: 40 exec/s: 52 rss: 73Mb L: 31/40 MS: 1 InsertByte- 00:06:45.872 [2024-07-12 13:32:34.211313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.872 [2024-07-12 13:32:34.211337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.872 [2024-07-12 13:32:34.211385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff0a cdw11:0affffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.872 [2024-07-12 13:32:34.211395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.872 [2024-07-12 13:32:34.211441] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffbfff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.872 [2024-07-12 13:32:34.211452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.872 [2024-07-12 13:32:34.211496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffff7f3f cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.872 [2024-07-12 13:32:34.211506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.872 #53 NEW cov: 12127 ft: 14956 corp: 37/1180b lim: 40 exec/s: 26 rss: 73Mb L: 35/40 MS: 1 ChangeBit- 00:06:45.872 #53 DONE cov: 12127 ft: 14956 corp: 37/1180b lim: 40 exec/s: 26 rss: 73Mb 00:06:45.872 ###### Recommended dictionary. ###### 00:06:45.872 "\000\002\000\000" # Uses: 0 00:06:45.872 ###### End of recommended dictionary. ###### 00:06:45.872 Done 53 runs in 2 second(s) 00:06:45.872 13:32:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:06:45.872 13:32:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:45.872 13:32:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:45.872 13:32:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:06:45.872 13:32:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:06:45.872 13:32:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:45.872 13:32:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:45.872 13:32:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:45.872 13:32:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:06:45.872 13:32:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:45.872 13:32:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:45.872 13:32:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:06:45.872 13:32:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:06:45.872 13:32:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:45.872 13:32:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:06:45.872 13:32:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:45.872 13:32:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:45.872 13:32:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:45.872 13:32:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c 
/tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:06:45.872 [2024-07-12 13:32:34.387867] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:06:45.872 [2024-07-12 13:32:34.387950] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2444547 ] 00:06:45.872 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.133 [2024-07-12 13:32:34.541947] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.133 [2024-07-12 13:32:34.594527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.133 [2024-07-12 13:32:34.655904] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.133 [2024-07-12 13:32:34.672251] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:06:46.133 INFO: Running with entropic power schedule (0xFF, 100). 00:06:46.133 INFO: Seed: 3261892867 00:06:46.133 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:46.133 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:46.133 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:46.133 INFO: A corpus is not provided, starting from an empty corpus 00:06:46.133 #2 INITED exec/s: 0 rss: 63Mb 00:06:46.133 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:46.133 This may also happen if the target rejected all inputs we tried so far 00:06:46.394 [2024-07-12 13:32:34.727689] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.394 [2024-07-12 13:32:34.727726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.394 [2024-07-12 13:32:34.727783] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.394 [2024-07-12 13:32:34.727797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.394 [2024-07-12 13:32:34.727855] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.394 [2024-07-12 13:32:34.727869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.394 NEW_FUNC[1/696]: 0x497f50 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:06:46.394 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:46.394 #10 NEW cov: 11884 ft: 11884 corp: 2/28b lim: 35 exec/s: 0 rss: 68Mb L: 27/27 MS: 3 ShuffleBytes-InsertRepeatedBytes-InsertRepeatedBytes- 00:06:46.394 [2024-07-12 13:32:34.908500] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.394 [2024-07-12 13:32:34.908562] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.394 [2024-07-12 13:32:34.908646] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.394 [2024-07-12 13:32:34.908668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.394 [2024-07-12 13:32:34.908750] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.394 [2024-07-12 13:32:34.908770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.394 #16 NEW cov: 12014 ft: 12570 corp: 3/55b lim: 35 exec/s: 0 rss: 69Mb L: 27/27 MS: 1 ShuffleBytes- 00:06:46.654 [2024-07-12 13:32:34.977911] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.654 [2024-07-12 13:32:34.977936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.654 #19 NEW cov: 12020 ft: 13471 corp: 4/63b lim: 35 exec/s: 0 rss: 69Mb L: 8/27 MS: 3 InsertByte-EraseBytes-CrossOver- 00:06:46.654 [2024-07-12 13:32:35.028318] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.654 [2024-07-12 13:32:35.028347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.654 [2024-07-12 13:32:35.028395] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.654 [2024-07-12 13:32:35.028406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.654 [2024-07-12 13:32:35.028452] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.654 [2024-07-12 13:32:35.028463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.654 #20 NEW cov: 12105 ft: 13713 corp: 5/89b lim: 35 exec/s: 0 rss: 69Mb L: 26/27 MS: 1 EraseBytes- 00:06:46.654 [2024-07-12 13:32:35.078478] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.654 [2024-07-12 13:32:35.078504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.654 [2024-07-12 13:32:35.078555] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.654 [2024-07-12 13:32:35.078565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.654 [2024-07-12 13:32:35.078615] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.654 [2024-07-12 13:32:35.078626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:06:46.654 #21 NEW cov: 12105 ft: 13815 corp: 6/116b lim: 35 exec/s: 0 rss: 69Mb L: 27/27 MS: 1 ChangeByte- 00:06:46.654 [2024-07-12 13:32:35.138338] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.654 [2024-07-12 13:32:35.138363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.654 #22 NEW cov: 12105 ft: 13903 corp: 7/124b lim: 35 exec/s: 0 rss: 69Mb L: 8/27 MS: 1 ChangeByte- 00:06:46.654 [2024-07-12 13:32:35.198481] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.654 [2024-07-12 13:32:35.198506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.654 #23 NEW cov: 12105 ft: 13961 corp: 8/133b lim: 35 exec/s: 0 rss: 69Mb L: 9/27 MS: 1 CrossOver- 00:06:46.915 [2024-07-12 13:32:35.249052] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.915 [2024-07-12 13:32:35.249076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.915 [2024-07-12 13:32:35.249127] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.915 [2024-07-12 13:32:35.249139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.915 [2024-07-12 13:32:35.249191] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.915 [2024-07-12 13:32:35.249201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.915 [2024-07-12 13:32:35.249256] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.915 [2024-07-12 13:32:35.249267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.915 #24 NEW cov: 12105 ft: 14286 corp: 9/164b lim: 35 exec/s: 0 rss: 69Mb L: 31/31 MS: 1 CrossOver- 00:06:46.915 [2024-07-12 13:32:35.309400] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.915 [2024-07-12 13:32:35.309426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.915 [2024-07-12 13:32:35.309477] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.915 [2024-07-12 13:32:35.309490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.915 [2024-07-12 13:32:35.309540] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.915 [2024-07-12 13:32:35.309553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.915 [2024-07-12 13:32:35.309601] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.915 [2024-07-12 13:32:35.309613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.915 [2024-07-12 13:32:35.309659] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.915 [2024-07-12 13:32:35.309671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.915 #26 NEW cov: 12105 ft: 14401 corp: 10/199b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:46.915 NEW_FUNC[1/2]: 0x4b9410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:06:46.915 NEW_FUNC[2/2]: 0x11f0900 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1765 00:06:46.915 #29 NEW cov: 12138 ft: 14477 corp: 11/211b lim: 35 exec/s: 0 rss: 69Mb L: 12/35 MS: 3 InsertByte-ChangeByte-InsertRepeatedBytes- 00:06:46.915 [2024-07-12 13:32:35.409360] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.915 [2024-07-12 13:32:35.409390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.916 [2024-07-12 13:32:35.409440] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.916 [2024-07-12 13:32:35.409451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.916 [2024-07-12 13:32:35.409498] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.916 [2024-07-12 13:32:35.409508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.916 #30 NEW cov: 12138 ft: 14516 corp: 12/237b lim: 35 exec/s: 0 rss: 69Mb L: 26/35 MS: 1 CopyPart- 00:06:47.176 #31 NEW cov: 12138 ft: 14640 corp: 13/250b lim: 35 exec/s: 0 rss: 69Mb L: 13/35 MS: 1 InsertByte- 00:06:47.176 [2024-07-12 13:32:35.529376] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.176 [2024-07-12 13:32:35.529403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.176 #32 NEW cov: 12138 ft: 14653 corp: 14/262b lim: 35 exec/s: 0 rss: 70Mb L: 12/35 MS: 1 CMP- DE: "\377\377\377\036"- 00:06:47.176 [2024-07-12 13:32:35.589794] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.176 [2024-07-12 13:32:35.589821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.176 [2024-07-12 13:32:35.589874] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED 
cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.176 [2024-07-12 13:32:35.589884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.176 [2024-07-12 13:32:35.589935] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.176 [2024-07-12 13:32:35.589946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.176 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:47.176 #33 NEW cov: 12161 ft: 14709 corp: 15/288b lim: 35 exec/s: 0 rss: 70Mb L: 26/35 MS: 1 ChangeBit- 00:06:47.176 [2024-07-12 13:32:35.649995] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.177 [2024-07-12 13:32:35.650024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.177 [2024-07-12 13:32:35.650076] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.177 [2024-07-12 13:32:35.650087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.177 [2024-07-12 13:32:35.650135] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.177 [2024-07-12 13:32:35.650146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.177 #34 NEW cov: 12161 ft: 14728 corp: 16/314b lim: 35 exec/s: 0 rss: 70Mb L: 26/35 MS: 1 ChangeBit- 00:06:47.177 [2024-07-12 13:32:35.689792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.177 [2024-07-12 13:32:35.689817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.177 #35 NEW cov: 12161 ft: 14758 corp: 17/323b lim: 35 exec/s: 35 rss: 70Mb L: 9/35 MS: 1 InsertRepeatedBytes- 00:06:47.177 [2024-07-12 13:32:35.729900] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.177 [2024-07-12 13:32:35.729927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.438 #36 NEW cov: 12161 ft: 14781 corp: 18/335b lim: 35 exec/s: 36 rss: 70Mb L: 12/35 MS: 1 ChangeByte- 00:06:47.438 [2024-07-12 13:32:35.790515] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.438 [2024-07-12 13:32:35.790542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.438 [2024-07-12 13:32:35.790591] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.438 [2024-07-12 13:32:35.790606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.438 [2024-07-12 13:32:35.790653] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.438 [2024-07-12 13:32:35.790663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.438 [2024-07-12 13:32:35.790711] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.438 [2024-07-12 13:32:35.790722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.438 #37 NEW cov: 12161 ft: 14792 corp: 19/369b lim: 35 exec/s: 37 rss: 70Mb L: 34/35 MS: 1 CrossOver- 00:06:47.438 [2024-07-12 13:32:35.840640] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.438 [2024-07-12 13:32:35.840666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.438 [2024-07-12 13:32:35.840715] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.438 [2024-07-12 13:32:35.840728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.438 [2024-07-12 13:32:35.840775] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.438 [2024-07-12 13:32:35.840786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.438 [2024-07-12 13:32:35.840834] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.438 [2024-07-12 13:32:35.840844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.438 #38 NEW cov: 12161 ft: 14803 corp: 20/400b lim: 35 exec/s: 38 rss: 70Mb L: 31/35 MS: 1 ShuffleBytes- 00:06:47.438 [2024-07-12 13:32:35.900534] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.438 [2024-07-12 13:32:35.900560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.438 [2024-07-12 13:32:35.900615] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.438 [2024-07-12 13:32:35.900626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.438 #39 NEW cov: 12164 ft: 15071 corp: 21/418b lim: 35 exec/s: 39 rss: 70Mb L: 18/35 MS: 1 EraseBytes- 00:06:47.438 [2024-07-12 13:32:35.951131] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.438 [2024-07-12 13:32:35.951158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.438 [2024-07-12 13:32:35.951205] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.438 [2024-07-12 13:32:35.951217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.438 [2024-07-12 13:32:35.951270] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.438 [2024-07-12 13:32:35.951283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.438 [2024-07-12 13:32:35.951335] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.438 [2024-07-12 13:32:35.951348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.438 [2024-07-12 13:32:35.951399] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.438 [2024-07-12 13:32:35.951410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:47.438 #40 NEW cov: 12164 ft: 15104 corp: 22/453b lim: 35 exec/s: 40 rss: 70Mb L: 35/35 MS: 1 CrossOver- 00:06:47.438 [2024-07-12 13:32:36.010837] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.438 [2024-07-12 13:32:36.010863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.438 [2024-07-12 13:32:36.010916] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.438 [2024-07-12 13:32:36.010927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.698 #41 NEW cov: 12164 ft: 15125 corp: 23/471b lim: 35 exec/s: 41 rss: 70Mb L: 18/35 MS: 1 ShuffleBytes- 00:06:47.699 [2024-07-12 13:32:36.070998] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.699 [2024-07-12 13:32:36.071022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.699 [2024-07-12 13:32:36.071074] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000027 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.699 [2024-07-12 13:32:36.071085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.699 #42 NEW cov: 12164 ft: 15132 corp: 24/488b lim: 35 exec/s: 42 rss: 70Mb L: 17/35 MS: 1 CMP- DE: "8*\015l\011\033'\000"- 00:06:47.699 [2024-07-12 13:32:36.121513] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.699 [2024-07-12 13:32:36.121539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.699 [2024-07-12 13:32:36.121588] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.699 [2024-07-12 13:32:36.121601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.699 [2024-07-12 13:32:36.121652] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.699 [2024-07-12 13:32:36.121664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.699 [2024-07-12 13:32:36.121713] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.699 [2024-07-12 13:32:36.121725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.699 [2024-07-12 13:32:36.121774] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.699 [2024-07-12 13:32:36.121786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:47.699 #43 NEW cov: 12164 ft: 15164 corp: 25/523b lim: 35 exec/s: 43 rss: 70Mb L: 35/35 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:06:47.699 [2024-07-12 13:32:36.171239] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.699 [2024-07-12 13:32:36.171268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.699 [2024-07-12 13:32:36.171321] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.699 [2024-07-12 13:32:36.171331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.699 #44 NEW cov: 12164 ft: 15168 corp: 26/541b lim: 35 exec/s: 44 rss: 70Mb L: 18/35 MS: 1 ChangeBinInt- 00:06:47.699 [2024-07-12 13:32:36.231738] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.699 [2024-07-12 13:32:36.231764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.699 [2024-07-12 13:32:36.231813] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.699 [2024-07-12 13:32:36.231825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.699 [2024-07-12 13:32:36.231871] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.699 [2024-07-12 13:32:36.231881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.699 [2024-07-12 13:32:36.231929] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:47.699 [2024-07-12 13:32:36.231939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.699 #45 NEW cov: 12164 ft: 15177 corp: 27/575b lim: 35 exec/s: 45 rss: 70Mb L: 34/35 MS: 1 ChangeByte- 00:06:47.960 [2024-07-12 13:32:36.291806] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.960 [2024-07-12 13:32:36.291831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.960 [2024-07-12 13:32:36.291880] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.960 [2024-07-12 13:32:36.291893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.960 [2024-07-12 13:32:36.291943] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.960 [2024-07-12 13:32:36.291954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.960 [2024-07-12 13:32:36.292002] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.960 [2024-07-12 13:32:36.292012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.960 #46 NEW cov: 12164 ft: 15190 corp: 28/606b lim: 35 exec/s: 46 rss: 70Mb L: 31/35 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:06:47.960 [2024-07-12 13:32:36.341682] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000031 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.960 [2024-07-12 13:32:36.341706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.960 [2024-07-12 13:32:36.341758] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000027 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.960 [2024-07-12 13:32:36.341768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.960 #47 NEW cov: 12164 ft: 15203 corp: 29/623b lim: 35 exec/s: 47 rss: 70Mb L: 17/35 MS: 1 ChangeByte- 00:06:47.960 [2024-07-12 13:32:36.401982] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.960 [2024-07-12 13:32:36.402008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.960 [2024-07-12 13:32:36.402054] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.960 [2024-07-12 13:32:36.402065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.960 [2024-07-12 13:32:36.402114] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.960 [2024-07-12 13:32:36.402125] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.960 #48 NEW cov: 12164 ft: 15282 corp: 30/649b lim: 35 exec/s: 48 rss: 70Mb L: 26/35 MS: 1 ChangeByte- 00:06:47.960 [2024-07-12 13:32:36.442095] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.960 [2024-07-12 13:32:36.442121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.960 [2024-07-12 13:32:36.442172] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.961 [2024-07-12 13:32:36.442183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.961 [2024-07-12 13:32:36.442238] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.961 [2024-07-12 13:32:36.442249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.961 #49 NEW cov: 12164 ft: 15307 corp: 31/675b lim: 35 exec/s: 49 rss: 70Mb L: 26/35 MS: 1 PersAutoDict- DE: "\377\377\377\036"- 00:06:47.961 [2024-07-12 13:32:36.502272] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.961 [2024-07-12 13:32:36.502300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.961 [2024-07-12 13:32:36.502350] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.961 [2024-07-12 13:32:36.502361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.961 [2024-07-12 13:32:36.502410] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.961 [2024-07-12 13:32:36.502421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.221 #50 NEW cov: 12164 ft: 15350 corp: 32/701b lim: 35 exec/s: 50 rss: 72Mb L: 26/35 MS: 1 ShuffleBytes- 00:06:48.221 [2024-07-12 13:32:36.562397] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.221 [2024-07-12 13:32:36.562423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.221 [2024-07-12 13:32:36.562477] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.221 [2024-07-12 13:32:36.562487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.221 [2024-07-12 13:32:36.562535] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.221 [2024-07-12 13:32:36.562549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.221 #51 NEW cov: 12164 ft: 15356 corp: 33/727b lim: 35 exec/s: 51 rss: 72Mb L: 26/35 MS: 1 ChangeBinInt- 00:06:48.221 [2024-07-12 13:32:36.602501] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.221 [2024-07-12 13:32:36.602527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.221 [2024-07-12 13:32:36.602579] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.221 [2024-07-12 13:32:36.602590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.221 [2024-07-12 13:32:36.602640] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.221 [2024-07-12 13:32:36.602650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.221 #52 NEW cov: 12164 ft: 15362 corp: 34/753b lim: 35 exec/s: 52 rss: 72Mb L: 26/35 MS: 1 ShuffleBytes- 00:06:48.221 [2024-07-12 13:32:36.652795] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.221 [2024-07-12 13:32:36.652822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.222 [2024-07-12 13:32:36.652873] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.222 [2024-07-12 13:32:36.652884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.222 [2024-07-12 13:32:36.652936] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.222 [2024-07-12 13:32:36.652947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.222 [2024-07-12 13:32:36.652994] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.222 [2024-07-12 13:32:36.653004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.222 #53 NEW cov: 12164 ft: 15395 corp: 35/787b lim: 35 exec/s: 53 rss: 72Mb L: 34/35 MS: 1 CopyPart- 00:06:48.222 [2024-07-12 13:32:36.703052] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000db SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.222 [2024-07-12 13:32:36.703078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.222 [2024-07-12 13:32:36.703131] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.222 [2024-07-12 13:32:36.703142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.222 
[2024-07-12 13:32:36.703192] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:48.222 [2024-07-12 13:32:36.703202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:48.222 [2024-07-12 13:32:36.703264] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:48.222 [2024-07-12 13:32:36.703275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:48.222 [2024-07-12 13:32:36.703327] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:06:48.222 [2024-07-12 13:32:36.703338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:06:48.222 #54 NEW cov: 12164 ft: 15411 corp: 36/822b lim: 35 exec/s: 27 rss: 72Mb L: 35/35 MS: 1 CrossOver-
00:06:48.222 #54 DONE cov: 12164 ft: 15411 corp: 36/822b lim: 35 exec/s: 27 rss: 72Mb
00:06:48.222 ###### Recommended dictionary. ######
00:06:48.222 "\377\377\377\036" # Uses: 1
00:06:48.222 "8*\015l\011\033'\000" # Uses: 0
00:06:48.222 "\377\377\377\377\377\377\377\377" # Uses: 1
00:06:48.222 ###### End of recommended dictionary. ######
00:06:48.222 Done 54 runs in 2 second(s)
00:06:48.482 13:32:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz
00:06:48.482 13:32:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:48.482 13:32:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:48.482 13:32:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1
00:06:48.482 13:32:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15
00:06:48.482 13:32:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:48.482 13:32:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:48.482 13:32:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15
00:06:48.482 13:32:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf
00:06:48.482 13:32:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:48.482 13:32:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:48.482 13:32:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15
00:06:48.482 13:32:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415
00:06:48.482 13:32:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15
00:06:48.482 13:32:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415'
00:06:48.482 13:32:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:48.482 13:32:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:48.482 13:32:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:48.482 13:32:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15
00:06:48.482 [2024-07-12 13:32:36.874914] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization...
00:06:48.482 [2024-07-12 13:32:36.874990] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2444937 ]
00:06:48.482 EAL: No free 2048 kB hugepages reported on node 1
00:06:48.482 [2024-07-12 13:32:37.029049] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:48.742 [2024-07-12 13:32:37.085276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:48.742 [2024-07-12 13:32:37.147091] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:48.742 [2024-07-12 13:32:37.163424] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 ***
00:06:48.742 INFO: Running with entropic power schedule (0xFF, 100).
00:06:48.742 INFO: Seed: 1456920218
00:06:48.742 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1),
00:06:48.742 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218),
00:06:48.742 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15
00:06:48.742 INFO: A corpus is not provided, starting from an empty corpus
00:06:48.742 #2 INITED exec/s: 0 rss: 64Mb
00:06:48.742 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:06:48.742 This may also happen if the target rejected all inputs we tried so far 00:06:48.742 [2024-07-12 13:32:37.223367] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000121 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.742 [2024-07-12 13:32:37.223404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.003 NEW_FUNC[1/695]: 0x499490 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:06:49.003 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:49.003 #6 NEW cov: 11865 ft: 11865 corp: 2/8b lim: 35 exec/s: 0 rss: 70Mb L: 7/7 MS: 4 ChangeBit-CopyPart-ChangeByte-InsertRepeatedBytes- 00:06:49.003 [2024-07-12 13:32:37.413824] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000013f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.003 [2024-07-12 13:32:37.413878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.003 #16 NEW cov: 11995 ft: 12368 corp: 3/15b lim: 35 exec/s: 0 rss: 70Mb L: 7/7 MS: 5 EraseBytes-CopyPart-InsertByte-ChangeBit-InsertByte- 00:06:49.003 [2024-07-12 13:32:37.493945] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000021 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.003 [2024-07-12 13:32:37.493977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.003 #17 NEW cov: 12001 ft: 12630 corp: 4/22b lim: 35 exec/s: 0 rss: 70Mb L: 7/7 MS: 1 ChangeByte- 00:06:49.003 [2024-07-12 13:32:37.554130] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000121 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.003 [2024-07-12 13:32:37.554160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.003 #18 NEW cov: 12086 ft: 13000 corp: 5/30b lim: 35 exec/s: 0 rss: 70Mb L: 8/8 MS: 1 InsertByte- 00:06:49.264 [2024-07-12 13:32:37.614251] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.264 [2024-07-12 13:32:37.614282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.264 #20 NEW cov: 12086 ft: 13154 corp: 6/38b lim: 35 exec/s: 0 rss: 70Mb L: 8/8 MS: 2 ChangeByte-CrossOver- 00:06:49.264 [2024-07-12 13:32:37.674613] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000021 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.264 [2024-07-12 13:32:37.674642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.264 #21 NEW cov: 12086 ft: 13189 corp: 7/51b lim: 35 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 CopyPart- 00:06:49.264 [2024-07-12 13:32:37.744777] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.264 [2024-07-12 13:32:37.744808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:49.264 #22 NEW cov: 12086 ft: 13276 corp: 8/59b lim: 35 exec/s: 0 rss: 70Mb L: 8/13 MS: 1 ChangeByte- 00:06:49.264 [2024-07-12 13:32:37.815046] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.264 [2024-07-12 13:32:37.815075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.525 #28 NEW cov: 12086 ft: 13303 corp: 9/68b lim: 35 exec/s: 0 rss: 70Mb L: 9/13 MS: 1 InsertByte- 00:06:49.525 [2024-07-12 13:32:37.885343] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000021 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.525 [2024-07-12 13:32:37.885376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.525 #29 NEW cov: 12086 ft: 13329 corp: 10/81b lim: 35 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 ChangeBinInt- 00:06:49.525 [2024-07-12 13:32:37.955586] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000001ea SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.525 [2024-07-12 13:32:37.955616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.525 #33 NEW cov: 12086 ft: 13353 corp: 11/94b lim: 35 exec/s: 0 rss: 70Mb L: 13/13 MS: 4 CrossOver-CopyPart-CrossOver-CrossOver- 00:06:49.525 [2024-07-12 13:32:38.016434] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000330 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.525 [2024-07-12 13:32:38.016463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.525 [2024-07-12 13:32:38.016585] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000367 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.525 [2024-07-12 13:32:38.016603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.525 [2024-07-12 13:32:38.016724] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000367 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.525 [2024-07-12 13:32:38.016739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.525 #36 NEW cov: 12086 ft: 13802 corp: 12/119b lim: 35 exec/s: 0 rss: 70Mb L: 25/25 MS: 3 EraseBytes-InsertByte-InsertRepeatedBytes- 00:06:49.525 [2024-07-12 13:32:38.076403] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TIMESTAMP cid:4 cdw10:0000000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.525 [2024-07-12 13:32:38.076433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.525 [2024-07-12 13:32:38.076556] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.525 [2024-07-12 13:32:38.076573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.786 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:49.786 #38 
NEW cov: 12109 ft: 14004 corp: 13/134b lim: 35 exec/s: 0 rss: 72Mb L: 15/25 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:49.786 [2024-07-12 13:32:38.136292] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000013f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.786 [2024-07-12 13:32:38.136326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.786 #39 NEW cov: 12109 ft: 14021 corp: 14/142b lim: 35 exec/s: 0 rss: 72Mb L: 8/25 MS: 1 InsertByte- 00:06:49.786 [2024-07-12 13:32:38.206418] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.786 [2024-07-12 13:32:38.206448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.786 #40 NEW cov: 12109 ft: 14034 corp: 15/150b lim: 35 exec/s: 40 rss: 72Mb L: 8/25 MS: 1 ChangeByte- 00:06:49.786 [2024-07-12 13:32:38.266988] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000001ea SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.786 [2024-07-12 13:32:38.267017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.786 [2024-07-12 13:32:38.267146] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.786 [2024-07-12 13:32:38.267163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.786 #51 NEW cov: 12109 ft: 14049 corp: 16/169b lim: 35 exec/s: 51 rss: 72Mb L: 19/25 MS: 1 CopyPart- 00:06:49.786 [2024-07-12 13:32:38.336938] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000121 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.786 [2024-07-12 13:32:38.336966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.047 #52 NEW cov: 12109 ft: 14056 corp: 17/177b lim: 35 exec/s: 52 rss: 72Mb L: 8/25 MS: 1 ChangeBit- 00:06:50.047 [2024-07-12 13:32:38.408097] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.047 [2024-07-12 13:32:38.408123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.047 [2024-07-12 13:32:38.408249] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005ea SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.047 [2024-07-12 13:32:38.408264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.047 [2024-07-12 13:32:38.408381] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005af SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.047 [2024-07-12 13:32:38.408398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.047 [2024-07-12 13:32:38.408525] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005af SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.047 [2024-07-12 
13:32:38.408540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.047 #53 NEW cov: 12109 ft: 14508 corp: 18/209b lim: 35 exec/s: 53 rss: 72Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:06:50.047 [2024-07-12 13:32:38.488384] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000330 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.047 [2024-07-12 13:32:38.488413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.047 [2024-07-12 13:32:38.488539] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000367 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.047 [2024-07-12 13:32:38.488555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.047 [2024-07-12 13:32:38.488673] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000367 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.047 [2024-07-12 13:32:38.488690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.047 [2024-07-12 13:32:38.488813] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000367 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.047 [2024-07-12 13:32:38.488830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.047 #54 NEW cov: 12109 ft: 14532 corp: 19/238b lim: 35 exec/s: 54 rss: 72Mb L: 29/32 MS: 1 CopyPart- 00:06:50.047 [2024-07-12 13:32:38.568312] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000330 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.047 [2024-07-12 13:32:38.568339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.047 [2024-07-12 13:32:38.568454] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000367 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.047 [2024-07-12 13:32:38.568475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.047 [2024-07-12 13:32:38.568595] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000367 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.047 [2024-07-12 13:32:38.568610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.047 #55 NEW cov: 12109 ft: 14542 corp: 20/263b lim: 35 exec/s: 55 rss: 72Mb L: 25/32 MS: 1 ChangeBit- 00:06:50.047 [2024-07-12 13:32:38.627945] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007e6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.047 [2024-07-12 13:32:38.627974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.308 #56 NEW cov: 12109 ft: 14603 corp: 21/276b lim: 35 exec/s: 56 rss: 72Mb L: 13/32 MS: 1 ChangeBinInt- 00:06:50.308 [2024-07-12 13:32:38.689026] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000330 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:50.308 [2024-07-12 13:32:38.689052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.308 [2024-07-12 13:32:38.689173] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000367 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.308 [2024-07-12 13:32:38.689190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.308 [2024-07-12 13:32:38.689310] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000367 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.308 [2024-07-12 13:32:38.689327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.308 [2024-07-12 13:32:38.689450] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000367 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.308 [2024-07-12 13:32:38.689466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.308 #57 NEW cov: 12109 ft: 14613 corp: 22/308b lim: 35 exec/s: 57 rss: 72Mb L: 32/32 MS: 1 CopyPart- 00:06:50.308 [2024-07-12 13:32:38.768459] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000013f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.308 [2024-07-12 13:32:38.768489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.308 #58 NEW cov: 12109 ft: 14623 corp: 23/316b lim: 35 exec/s: 58 rss: 72Mb L: 8/32 MS: 1 ChangeBit- 00:06:50.308 [2024-07-12 13:32:38.838714] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.308 [2024-07-12 13:32:38.838743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.308 #59 NEW cov: 12109 ft: 14629 corp: 24/325b lim: 35 exec/s: 59 rss: 72Mb L: 9/32 MS: 1 ShuffleBytes- 00:06:50.569 [2024-07-12 13:32:38.909004] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000170 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.569 [2024-07-12 13:32:38.909034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.569 #60 NEW cov: 12109 ft: 14688 corp: 25/333b lim: 35 exec/s: 60 rss: 73Mb L: 8/32 MS: 1 ShuffleBytes- 00:06:50.569 [2024-07-12 13:32:38.969157] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000721 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.569 [2024-07-12 13:32:38.969185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.569 #66 NEW cov: 12109 ft: 14699 corp: 26/340b lim: 35 exec/s: 66 rss: 73Mb L: 7/32 MS: 1 ShuffleBytes- 00:06:50.569 [2024-07-12 13:32:39.029360] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000001ea SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.569 [2024-07-12 13:32:39.029392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.569 #67 NEW cov: 12109 ft: 14712 corp: 27/351b 
lim: 35 exec/s: 67 rss: 73Mb L: 11/32 MS: 1 EraseBytes-
00:06:50.569 [2024-07-12 13:32:39.100508] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000330 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:50.569 [2024-07-12 13:32:39.100537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:50.569 [2024-07-12 13:32:39.100656] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000367 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:50.569 [2024-07-12 13:32:39.100673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:06:50.569 [2024-07-12 13:32:39.100796] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000367 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:50.569 [2024-07-12 13:32:39.100814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:06:50.569 [2024-07-12 13:32:39.100930] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000367 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:50.569 [2024-07-12 13:32:39.100946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:06:50.569 #68 NEW cov: 12109 ft: 14730 corp: 28/383b lim: 35 exec/s: 68 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt-
00:06:50.829 [2024-07-12 13:32:39.179901] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:50.829 [2024-07-12 13:32:39.179932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:06:50.829 #69 NEW cov: 12109 ft: 14742 corp: 29/392b lim: 35 exec/s: 34 rss: 73Mb L: 9/32 MS: 1 CrossOver-
00:06:50.829 #69 DONE cov: 12109 ft: 14742 corp: 29/392b lim: 35 exec/s: 34 rss: 73Mb
00:06:50.829 Done 69 runs in 2 second(s)
00:06:50.829 13:32:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz
00:06:50.829 13:32:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:50.829 13:32:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:50.829 13:32:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1
00:06:50.829 13:32:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16
00:06:50.829 13:32:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:50.829 13:32:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:50.829 13:32:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16
00:06:50.829 13:32:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf
00:06:50.829 13:32:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:50.829 13:32:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:50.829 13:32:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16
00:06:50.829 13:32:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416
00:06:50.829 13:32:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16
00:06:50.829 13:32:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416'
00:06:50.829 13:32:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:50.829 13:32:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:50.829 13:32:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:50.829 13:32:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16
00:06:51.090 [2024-07-12 13:32:39.342594] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization...
00:06:51.090 [2024-07-12 13:32:39.342693] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2445510 ]
00:06:51.090 EAL: No free 2048 kB hugepages reported on node 1
00:06:51.090 [2024-07-12 13:32:39.495854] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:51.090 [2024-07-12 13:32:39.547931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:51.091 [2024-07-12 13:32:39.609371] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:51.091 [2024-07-12 13:32:39.625698] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 ***
00:06:51.091 INFO: Running with entropic power schedule (0xFF, 100).
00:06:51.091 INFO: Seed: 3919926941
00:06:51.091 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1),
00:06:51.091 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218),
00:06:51.091 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16
00:06:51.091 INFO: A corpus is not provided, starting from an empty corpus
00:06:51.091 #2 INITED exec/s: 0 rss: 64Mb
00:06:51.091 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:06:51.091 This may also happen if the target rejected all inputs we tried so far 00:06:51.352 [2024-07-12 13:32:39.684521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.352 [2024-07-12 13:32:39.684552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.352 [2024-07-12 13:32:39.684591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.352 [2024-07-12 13:32:39.684604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.352 [2024-07-12 13:32:39.684649] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.352 [2024-07-12 13:32:39.684661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.352 NEW_FUNC[1/696]: 0x49a940 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:06:51.352 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:51.352 #21 NEW cov: 11969 ft: 11970 corp: 2/81b lim: 105 exec/s: 0 rss: 70Mb L: 80/80 MS: 4 ChangeBit-ChangeByte-ChangeByte-InsertRepeatedBytes- 00:06:51.352 [2024-07-12 13:32:39.865394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.352 [2024-07-12 13:32:39.865461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.352 [2024-07-12 13:32:39.865543] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.352 [2024-07-12 13:32:39.865571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.352 [2024-07-12 13:32:39.865656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.352 [2024-07-12 13:32:39.865682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.352 [2024-07-12 13:32:39.865761] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.352 [2024-07-12 13:32:39.865788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.352 #22 NEW cov: 12099 ft: 13166 corp: 3/185b lim: 105 exec/s: 0 rss: 70Mb L: 104/104 MS: 1 InsertRepeatedBytes- 00:06:51.352 [2024-07-12 13:32:39.935145] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.352 [2024-07-12 13:32:39.935175] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.352 [2024-07-12 13:32:39.935220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.352 [2024-07-12 13:32:39.935233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.352 [2024-07-12 13:32:39.935279] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.352 [2024-07-12 13:32:39.935291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.352 [2024-07-12 13:32:39.935335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551366 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.352 [2024-07-12 13:32:39.935347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.614 #33 NEW cov: 12105 ft: 13498 corp: 4/289b lim: 105 exec/s: 0 rss: 70Mb L: 104/104 MS: 1 ChangeBinInt- 00:06:51.614 [2024-07-12 13:32:39.995174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.614 [2024-07-12 13:32:39.995201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.614 [2024-07-12 13:32:39.995251] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.614 [2024-07-12 13:32:39.995261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.614 [2024-07-12 13:32:39.995304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.614 [2024-07-12 13:32:39.995315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.614 #34 NEW cov: 12190 ft: 13799 corp: 5/369b lim: 105 exec/s: 0 rss: 70Mb L: 80/104 MS: 1 ShuffleBytes- 00:06:51.614 [2024-07-12 13:32:40.045526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.614 [2024-07-12 13:32:40.045555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.614 [2024-07-12 13:32:40.045597] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.614 [2024-07-12 13:32:40.045607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.614 [2024-07-12 13:32:40.045654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.614 [2024-07-12 13:32:40.045667] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.614 [2024-07-12 13:32:40.045708] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.614 [2024-07-12 13:32:40.045720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.614 [2024-07-12 13:32:40.045765] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.614 [2024-07-12 13:32:40.045778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:51.614 #35 NEW cov: 12190 ft: 13950 corp: 6/474b lim: 105 exec/s: 0 rss: 70Mb L: 105/105 MS: 1 CrossOver- 00:06:51.614 [2024-07-12 13:32:40.095559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.614 [2024-07-12 13:32:40.095588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.614 [2024-07-12 13:32:40.095632] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.614 [2024-07-12 13:32:40.095642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.614 [2024-07-12 13:32:40.095686] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.614 [2024-07-12 13:32:40.095698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.614 [2024-07-12 13:32:40.095741] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.614 [2024-07-12 13:32:40.095754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.614 #36 NEW cov: 12190 ft: 14005 corp: 7/560b lim: 105 exec/s: 0 rss: 70Mb L: 86/105 MS: 1 CopyPart- 00:06:51.614 [2024-07-12 13:32:40.145666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.614 [2024-07-12 13:32:40.145693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.614 [2024-07-12 13:32:40.145737] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.614 [2024-07-12 13:32:40.145746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.614 [2024-07-12 13:32:40.145788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.614 
[2024-07-12 13:32:40.145800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.614 [2024-07-12 13:32:40.145845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.614 [2024-07-12 13:32:40.145857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.614 #37 NEW cov: 12190 ft: 14150 corp: 8/648b lim: 105 exec/s: 0 rss: 70Mb L: 88/105 MS: 1 EraseBytes- 00:06:51.875 [2024-07-12 13:32:40.205826] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.875 [2024-07-12 13:32:40.205853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.875 [2024-07-12 13:32:40.205896] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.875 [2024-07-12 13:32:40.205905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.875 [2024-07-12 13:32:40.205948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.875 [2024-07-12 13:32:40.205959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.875 [2024-07-12 13:32:40.206003] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551366 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.875 [2024-07-12 13:32:40.206015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.875 #38 NEW cov: 12190 ft: 14216 corp: 9/752b lim: 105 exec/s: 0 rss: 70Mb L: 104/105 MS: 1 ChangeByte- 00:06:51.875 [2024-07-12 13:32:40.255880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.875 [2024-07-12 13:32:40.255904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.875 [2024-07-12 13:32:40.255946] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65530 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.875 [2024-07-12 13:32:40.255955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.875 [2024-07-12 13:32:40.255999] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.875 [2024-07-12 13:32:40.256011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.875 #39 NEW cov: 12190 ft: 14249 corp: 10/832b lim: 105 exec/s: 0 rss: 70Mb L: 80/105 MS: 1 ChangeBinInt- 00:06:51.875 [2024-07-12 13:32:40.316123] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.875 [2024-07-12 13:32:40.316149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.875 [2024-07-12 13:32:40.316192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.875 [2024-07-12 13:32:40.316202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.875 [2024-07-12 13:32:40.316248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.875 [2024-07-12 13:32:40.316260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.875 [2024-07-12 13:32:40.316304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551366 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.875 [2024-07-12 13:32:40.316316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.875 #40 NEW cov: 12190 ft: 14315 corp: 11/936b lim: 105 exec/s: 0 rss: 72Mb L: 104/105 MS: 1 ShuffleBytes- 00:06:51.875 [2024-07-12 13:32:40.376418] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.875 [2024-07-12 13:32:40.376444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.875 [2024-07-12 13:32:40.376484] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.875 [2024-07-12 13:32:40.376495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.875 [2024-07-12 13:32:40.376531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.875 [2024-07-12 13:32:40.376543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.875 [2024-07-12 13:32:40.376586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.875 [2024-07-12 13:32:40.376598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.875 [2024-07-12 13:32:40.376644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.875 [2024-07-12 13:32:40.376656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:51.875 #41 NEW cov: 12190 ft: 14331 corp: 12/1041b lim: 105 exec/s: 0 rss: 72Mb L: 105/105 MS: 1 ChangeBit- 00:06:51.876 [2024-07-12 13:32:40.436455] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.876 [2024-07-12 13:32:40.436481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.876 [2024-07-12 13:32:40.436523] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.876 [2024-07-12 13:32:40.436533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.876 [2024-07-12 13:32:40.436576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.876 [2024-07-12 13:32:40.436589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.876 [2024-07-12 13:32:40.436631] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551366 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.876 [2024-07-12 13:32:40.436643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.136 #42 NEW cov: 12190 ft: 14356 corp: 13/1145b lim: 105 exec/s: 0 rss: 72Mb L: 104/105 MS: 1 ChangeBit- 00:06:52.136 [2024-07-12 13:32:40.486487] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.136 [2024-07-12 13:32:40.486514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.136 [2024-07-12 13:32:40.486554] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:504403158265495551 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.136 [2024-07-12 13:32:40.486564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.136 [2024-07-12 13:32:40.486609] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:49152 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.136 [2024-07-12 13:32:40.486622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.136 #43 NEW cov: 12190 ft: 14450 corp: 14/1208b lim: 105 exec/s: 0 rss: 72Mb L: 63/105 MS: 1 EraseBytes- 00:06:52.136 [2024-07-12 13:32:40.546655] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.136 [2024-07-12 13:32:40.546681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.136 [2024-07-12 13:32:40.546723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.136 [2024-07-12 13:32:40.546732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.136 [2024-07-12 13:32:40.546775] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65377 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.136 [2024-07-12 13:32:40.546788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.136 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:52.136 #44 NEW cov: 12213 ft: 14533 corp: 15/1286b lim: 105 exec/s: 0 rss: 72Mb L: 78/105 MS: 1 CrossOver- 00:06:52.136 [2024-07-12 13:32:40.606816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.136 [2024-07-12 13:32:40.606847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.136 [2024-07-12 13:32:40.606883] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.136 [2024-07-12 13:32:40.606897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.136 [2024-07-12 13:32:40.606942] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.136 [2024-07-12 13:32:40.606954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.136 #45 NEW cov: 12213 ft: 14593 corp: 16/1366b lim: 105 exec/s: 0 rss: 72Mb L: 80/105 MS: 1 ShuffleBytes- 00:06:52.136 [2024-07-12 13:32:40.657025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.136 [2024-07-12 13:32:40.657052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.136 [2024-07-12 13:32:40.657099] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.136 [2024-07-12 13:32:40.657108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.136 [2024-07-12 13:32:40.657151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.136 [2024-07-12 13:32:40.657164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.136 [2024-07-12 13:32:40.657208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551366 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.136 [2024-07-12 13:32:40.657223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.136 #46 NEW cov: 12213 ft: 14610 corp: 17/1470b lim: 105 exec/s: 46 rss: 72Mb L: 104/105 MS: 1 ChangeBit- 00:06:52.136 [2024-07-12 13:32:40.707048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 
lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.136 [2024-07-12 13:32:40.707076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.136 [2024-07-12 13:32:40.707117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.136 [2024-07-12 13:32:40.707129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.136 [2024-07-12 13:32:40.707172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.136 [2024-07-12 13:32:40.707185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.397 #47 NEW cov: 12213 ft: 14659 corp: 18/1551b lim: 105 exec/s: 47 rss: 72Mb L: 81/105 MS: 1 InsertByte- 00:06:52.397 [2024-07-12 13:32:40.747355] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.397 [2024-07-12 13:32:40.747382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.397 [2024-07-12 13:32:40.747425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.397 [2024-07-12 13:32:40.747435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.397 [2024-07-12 13:32:40.747476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.397 [2024-07-12 13:32:40.747488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.397 [2024-07-12 13:32:40.747529] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551366 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.397 [2024-07-12 13:32:40.747541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.397 [2024-07-12 13:32:40.747586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:49152 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.397 [2024-07-12 13:32:40.747598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:52.397 #48 NEW cov: 12213 ft: 14682 corp: 19/1656b lim: 105 exec/s: 48 rss: 72Mb L: 105/105 MS: 1 CrossOver- 00:06:52.397 [2024-07-12 13:32:40.797269] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.397 [2024-07-12 13:32:40.797295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.397 [2024-07-12 13:32:40.797339] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.397 [2024-07-12 13:32:40.797348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.397 [2024-07-12 13:32:40.797392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:49152 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.397 [2024-07-12 13:32:40.797408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.397 #49 NEW cov: 12213 ft: 14689 corp: 20/1719b lim: 105 exec/s: 49 rss: 72Mb L: 63/105 MS: 1 CrossOver- 00:06:52.397 [2024-07-12 13:32:40.857561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.397 [2024-07-12 13:32:40.857588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.397 [2024-07-12 13:32:40.857630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:504403158265495551 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.397 [2024-07-12 13:32:40.857640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.397 [2024-07-12 13:32:40.857679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4278190080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.397 [2024-07-12 13:32:40.857692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.397 [2024-07-12 13:32:40.857733] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744071461404672 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.397 [2024-07-12 13:32:40.857745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.397 #50 NEW cov: 12213 ft: 14703 corp: 21/1814b lim: 105 exec/s: 50 rss: 72Mb L: 95/105 MS: 1 CrossOver- 00:06:52.397 [2024-07-12 13:32:40.907582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.397 [2024-07-12 13:32:40.907609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.397 [2024-07-12 13:32:40.907651] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65530 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.397 [2024-07-12 13:32:40.907661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.397 [2024-07-12 13:32:40.907704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.397 [2024-07-12 13:32:40.907716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.397 #51 NEW cov: 12213 ft: 14736 corp: 22/1894b 
lim: 105 exec/s: 51 rss: 72Mb L: 80/105 MS: 1 ShuffleBytes- 00:06:52.397 [2024-07-12 13:32:40.967747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.397 [2024-07-12 13:32:40.967773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.397 [2024-07-12 13:32:40.967818] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65530 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.397 [2024-07-12 13:32:40.967828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.397 [2024-07-12 13:32:40.967869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.397 [2024-07-12 13:32:40.967881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.657 #52 NEW cov: 12213 ft: 14847 corp: 23/1974b lim: 105 exec/s: 52 rss: 72Mb L: 80/105 MS: 1 ChangeByte- 00:06:52.657 [2024-07-12 13:32:41.028079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.657 [2024-07-12 13:32:41.028107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.657 [2024-07-12 13:32:41.028149] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.657 [2024-07-12 13:32:41.028159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.657 [2024-07-12 13:32:41.028200] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.657 [2024-07-12 13:32:41.028212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.657 [2024-07-12 13:32:41.028263] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.657 [2024-07-12 13:32:41.028276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.657 [2024-07-12 13:32:41.028320] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:18446502181151440895 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.657 [2024-07-12 13:32:41.028332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:52.657 #53 NEW cov: 12213 ft: 14869 corp: 24/2079b lim: 105 exec/s: 53 rss: 72Mb L: 105/105 MS: 1 ChangeByte- 00:06:52.657 [2024-07-12 13:32:41.088056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.657 [2024-07-12 13:32:41.088083] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.657 [2024-07-12 13:32:41.088126] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.657 [2024-07-12 13:32:41.088135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.657 [2024-07-12 13:32:41.088179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.657 [2024-07-12 13:32:41.088192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.657 #64 NEW cov: 12213 ft: 14915 corp: 25/2151b lim: 105 exec/s: 64 rss: 72Mb L: 72/105 MS: 1 EraseBytes- 00:06:52.657 [2024-07-12 13:32:41.148348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.657 [2024-07-12 13:32:41.148374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.657 [2024-07-12 13:32:41.148415] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.657 [2024-07-12 13:32:41.148425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.657 [2024-07-12 13:32:41.148466] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.657 [2024-07-12 13:32:41.148479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.657 [2024-07-12 13:32:41.148522] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551366 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.657 [2024-07-12 13:32:41.148537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.657 #65 NEW cov: 12213 ft: 14924 corp: 26/2255b lim: 105 exec/s: 65 rss: 72Mb L: 104/105 MS: 1 ChangeBit- 00:06:52.657 [2024-07-12 13:32:41.198475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.657 [2024-07-12 13:32:41.198502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.657 [2024-07-12 13:32:41.198547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:504403158265495551 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.657 [2024-07-12 13:32:41.198557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.657 [2024-07-12 13:32:41.198598] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4278190080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.657 [2024-07-12 13:32:41.198609] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.657 [2024-07-12 13:32:41.198652] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744071461404672 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.657 [2024-07-12 13:32:41.198663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.916 #66 NEW cov: 12213 ft: 14931 corp: 27/2350b lim: 105 exec/s: 66 rss: 73Mb L: 95/105 MS: 1 CMP- DE: "\377\007"- 00:06:52.916 [2024-07-12 13:32:41.258716] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.916 [2024-07-12 13:32:41.258743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.916 [2024-07-12 13:32:41.258786] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.916 [2024-07-12 13:32:41.258796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.916 [2024-07-12 13:32:41.258837] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.916 [2024-07-12 13:32:41.258850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.916 [2024-07-12 13:32:41.258893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.916 [2024-07-12 13:32:41.258905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.916 [2024-07-12 13:32:41.258950] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:18446502181151440895 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.916 [2024-07-12 13:32:41.258960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:52.916 #67 NEW cov: 12213 ft: 14986 corp: 28/2455b lim: 105 exec/s: 67 rss: 73Mb L: 105/105 MS: 1 ShuffleBytes- 00:06:52.916 [2024-07-12 13:32:41.318869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.916 [2024-07-12 13:32:41.318896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.916 [2024-07-12 13:32:41.318937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18374690882013101824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.916 [2024-07-12 13:32:41.318950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.916 [2024-07-12 13:32:41.318993] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.916 [2024-07-12 
13:32:41.319005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.916 [2024-07-12 13:32:41.319047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.916 [2024-07-12 13:32:41.319058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.916 [2024-07-12 13:32:41.319101] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:18446502181151440895 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.916 [2024-07-12 13:32:41.319112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:52.916 #68 NEW cov: 12213 ft: 14999 corp: 29/2560b lim: 105 exec/s: 68 rss: 73Mb L: 105/105 MS: 1 CopyPart- 00:06:52.916 [2024-07-12 13:32:41.378946] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.916 [2024-07-12 13:32:41.378972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.916 [2024-07-12 13:32:41.379016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:504403158265495551 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.916 [2024-07-12 13:32:41.379026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.916 [2024-07-12 13:32:41.379066] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4278190080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.916 [2024-07-12 13:32:41.379078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.916 [2024-07-12 13:32:41.379122] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446462603027808255 len:256 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.916 [2024-07-12 13:32:41.379134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.916 #69 NEW cov: 12213 ft: 15027 corp: 30/2655b lim: 105 exec/s: 69 rss: 73Mb L: 95/105 MS: 1 CopyPart- 00:06:52.916 [2024-07-12 13:32:41.428924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.916 [2024-07-12 13:32:41.428950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.916 [2024-07-12 13:32:41.428994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65530 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.916 [2024-07-12 13:32:41.429003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.916 [2024-07-12 13:32:41.429046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:06:52.916 [2024-07-12 13:32:41.429058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.916 #70 NEW cov: 12213 ft: 15036 corp: 31/2736b lim: 105 exec/s: 70 rss: 73Mb L: 81/105 MS: 1 InsertByte- 00:06:52.916 [2024-07-12 13:32:41.478983] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.916 [2024-07-12 13:32:41.479012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.916 [2024-07-12 13:32:41.479046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.916 [2024-07-12 13:32:41.479059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.176 #71 NEW cov: 12213 ft: 15339 corp: 32/2789b lim: 105 exec/s: 71 rss: 73Mb L: 53/105 MS: 1 EraseBytes- 00:06:53.176 [2024-07-12 13:32:41.539224] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.176 [2024-07-12 13:32:41.539255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.176 [2024-07-12 13:32:41.539300] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.176 [2024-07-12 13:32:41.539309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.176 [2024-07-12 13:32:41.539352] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65377 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.176 [2024-07-12 13:32:41.539364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.176 #72 NEW cov: 12213 ft: 15342 corp: 33/2867b lim: 105 exec/s: 72 rss: 73Mb L: 78/105 MS: 1 ChangeByte- 00:06:53.176 [2024-07-12 13:32:41.599490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.176 [2024-07-12 13:32:41.599516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.176 [2024-07-12 13:32:41.599559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.176 [2024-07-12 13:32:41.599569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.176 [2024-07-12 13:32:41.599609] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.176 [2024-07-12 13:32:41.599621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.176 [2024-07-12 13:32:41.599663] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551366 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:53.176 [2024-07-12 13:32:41.599676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:06:53.176 #73 NEW cov: 12213 ft: 15369 corp: 34/2971b lim: 105 exec/s: 73 rss: 73Mb L: 104/105 MS: 1 ChangeBinInt-
00:06:53.176 [2024-07-12 13:32:41.649703] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071041974271 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:53.176 [2024-07-12 13:32:41.649729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:06:53.176 [2024-07-12 13:32:41.649771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:65281 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:53.176 [2024-07-12 13:32:41.649781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:06:53.176 [2024-07-12 13:32:41.649821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:53.176 [2024-07-12 13:32:41.649839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:06:53.176 [2024-07-12 13:32:41.649881] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:53.176 [2024-07-12 13:32:41.649892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:06:53.176 [2024-07-12 13:32:41.649934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:16711680 len:256 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:53.176 [2024-07-12 13:32:41.649945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
00:06:53.176 #74 NEW cov: 12213 ft: 15406 corp: 35/3076b lim: 105 exec/s: 37 rss: 73Mb L: 105/105 MS: 1 CopyPart-
00:06:53.176 #74 DONE cov: 12213 ft: 15406 corp: 35/3076b lim: 105 exec/s: 37 rss: 73Mb
00:06:53.176 ###### Recommended dictionary. ######
00:06:53.176 "\377\007" # Uses: 0
00:06:53.176 ###### End of recommended dictionary. ######
00:06:53.176 Done 74 runs in 2 second(s)
00:06:53.435 13:32:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz
13:32:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
13:32:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
13:32:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1
13:32:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17
13:32:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
13:32:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
13:32:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
13:32:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf
13:32:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
13:32:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
13:32:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17
13:32:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417
13:32:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
13:32:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417'
13:32:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
13:32:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
13:32:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
13:32:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17
[2024-07-12 13:32:41.812538] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization...
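The nvmf/run.sh trace above shows how fuzzer instance 17 is parameterized before llvm_nvme_fuzz starts: a per-instance NVMe/TCP port is derived from the fuzzer number, the shared JSON config template's default port 4420 is rewritten to that port, and a LeakSanitizer suppression file is prepared. A minimal bash sketch of that sequence, reconstructed from the traced expansions (the redirections into nvmf_cfg and suppress_file, and the "44 plus zero-padded fuzzer number" port rule, are assumptions; the trace records only the expanded commands, not the script source):

    fuzzer_type=17
    nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
    suppress_file=/var/tmp/suppress_nvmf_fuzz
    # "printf %02d 17" followed by "port=4417" in the trace suggests the port
    # is "44" plus the zero-padded fuzzer number (assumed reconstruction)
    port="44$(printf %02d "$fuzzer_type")"
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    # retarget the shared template from the default NVMe/TCP port 4420 to this instance's port
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        test/fuzz/llvm/nvmf/fuzz_json.conf > "$nvmf_cfg"
    # LSAN suppressions: each "leak:<function>" line makes LeakSanitizer ignore
    # leaks whose allocation stacks pass through that function
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
    echo leak:nvmf_ctrlr_create >> "$suppress_file"
    export LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0

The "Recommended dictionary." block that closes run 16 above is standard libFuzzer end-of-run output; its quoted tokens ("\377\007" here) are already in libFuzzer/AFL dictionary format, so they could be saved to a file and fed back on a later run via libFuzzer's -dict=FILE option, assuming the harness forwards that flag through to libFuzzer.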
00:06:53.435 [2024-07-12 13:32:41.812642] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2445951 ] 00:06:53.435 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.435 [2024-07-12 13:32:41.964182] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.435 [2024-07-12 13:32:42.016436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.695 [2024-07-12 13:32:42.077971] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.695 [2024-07-12 13:32:42.094274] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:06:53.695 INFO: Running with entropic power schedule (0xFF, 100). 00:06:53.695 INFO: Seed: 2093951101 00:06:53.695 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:53.695 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:53.695 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:53.695 INFO: A corpus is not provided, starting from an empty corpus 00:06:53.695 #2 INITED exec/s: 0 rss: 64Mb 00:06:53.695 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:53.695 This may also happen if the target rejected all inputs we tried so far 00:06:53.695 [2024-07-12 13:32:42.149663] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.695 [2024-07-12 13:32:42.149703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.695 [2024-07-12 13:32:42.149738] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.695 [2024-07-12 13:32:42.149754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.695 [2024-07-12 13:32:42.149800] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.695 [2024-07-12 13:32:42.149813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.695 [2024-07-12 13:32:42.149854] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.695 [2024-07-12 13:32:42.149866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.955 NEW_FUNC[1/697]: 0x49dcc0 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:06:53.955 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:53.955 #16 NEW cov: 11990 ft: 11990 corp: 2/110b lim: 120 exec/s: 0 rss: 70Mb L: 109/109 MS: 4 ChangeBinInt-CrossOver-EraseBytes-InsertRepeatedBytes- 00:06:53.955 [2024-07-12 13:32:42.329956] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 
lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.955 [2024-07-12 13:32:42.330003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.955 [2024-07-12 13:32:42.330061] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.955 [2024-07-12 13:32:42.330080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.955 #17 NEW cov: 12120 ft: 13057 corp: 3/172b lim: 120 exec/s: 0 rss: 70Mb L: 62/109 MS: 1 CrossOver- 00:06:53.955 [2024-07-12 13:32:42.399964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.955 [2024-07-12 13:32:42.399995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.955 [2024-07-12 13:32:42.400030] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.955 [2024-07-12 13:32:42.400043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.955 #18 NEW cov: 12126 ft: 13238 corp: 4/234b lim: 120 exec/s: 0 rss: 70Mb L: 62/109 MS: 1 ChangeBinInt- 00:06:53.955 [2024-07-12 13:32:42.460388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.955 [2024-07-12 13:32:42.460416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.955 [2024-07-12 13:32:42.460463] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.955 [2024-07-12 13:32:42.460473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.955 [2024-07-12 13:32:42.460514] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.955 [2024-07-12 13:32:42.460527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.955 [2024-07-12 13:32:42.460569] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.955 [2024-07-12 13:32:42.460581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.955 #19 NEW cov: 12211 ft: 13533 corp: 5/343b lim: 120 exec/s: 0 rss: 70Mb L: 109/109 MS: 1 ChangeByte- 00:06:53.955 [2024-07-12 13:32:42.510512] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.955 [2024-07-12 13:32:42.510539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.955 [2024-07-12 13:32:42.510581] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:06:53.955 [2024-07-12 13:32:42.510591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.955 [2024-07-12 13:32:42.510629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.955 [2024-07-12 13:32:42.510640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.955 [2024-07-12 13:32:42.510682] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.955 [2024-07-12 13:32:42.510694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.216 #20 NEW cov: 12211 ft: 13617 corp: 6/452b lim: 120 exec/s: 0 rss: 70Mb L: 109/109 MS: 1 ChangeBit- 00:06:54.216 [2024-07-12 13:32:42.570535] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.216 [2024-07-12 13:32:42.570562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.216 [2024-07-12 13:32:42.570606] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.216 [2024-07-12 13:32:42.570615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.216 [2024-07-12 13:32:42.570659] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.216 [2024-07-12 13:32:42.570671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.216 #21 NEW cov: 12211 ft: 13938 corp: 7/532b lim: 120 exec/s: 0 rss: 70Mb L: 80/109 MS: 1 CrossOver- 00:06:54.216 [2024-07-12 13:32:42.620624] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.216 [2024-07-12 13:32:42.620656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.216 [2024-07-12 13:32:42.620700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.216 [2024-07-12 13:32:42.620710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.216 [2024-07-12 13:32:42.620753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.216 [2024-07-12 13:32:42.620765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.216 #22 NEW cov: 12211 ft: 14055 corp: 8/615b lim: 120 exec/s: 0 rss: 70Mb L: 83/109 MS: 1 CrossOver- 00:06:54.216 [2024-07-12 13:32:42.680914] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:54.216 [2024-07-12 13:32:42.680941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.216 [2024-07-12 13:32:42.680983] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:62720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.216 [2024-07-12 13:32:42.680993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.216 [2024-07-12 13:32:42.681029] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:2561 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.216 [2024-07-12 13:32:42.681041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.216 [2024-07-12 13:32:42.681084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.216 [2024-07-12 13:32:42.681096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.216 #23 NEW cov: 12211 ft: 14135 corp: 9/724b lim: 120 exec/s: 0 rss: 70Mb L: 109/109 MS: 1 CrossOver- 00:06:54.216 [2024-07-12 13:32:42.731065] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.216 [2024-07-12 13:32:42.731093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.216 [2024-07-12 13:32:42.731136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.216 [2024-07-12 13:32:42.731146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.216 [2024-07-12 13:32:42.731190] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.216 [2024-07-12 13:32:42.731202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.216 [2024-07-12 13:32:42.731246] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:253403070464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.216 [2024-07-12 13:32:42.731258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.216 #24 NEW cov: 12211 ft: 14148 corp: 10/833b lim: 120 exec/s: 0 rss: 70Mb L: 109/109 MS: 1 ChangeByte- 00:06:54.216 [2024-07-12 13:32:42.781173] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.216 [2024-07-12 13:32:42.781200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.216 [2024-07-12 13:32:42.781253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:15 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.216 [2024-07-12 13:32:42.781263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.216 [2024-07-12 13:32:42.781304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1012762419733073422 len:3599 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.216 [2024-07-12 13:32:42.781317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.216 [2024-07-12 13:32:42.781358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:1012762419733073422 len:3585 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.216 [2024-07-12 13:32:42.781370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.477 #25 NEW cov: 12211 ft: 14214 corp: 11/943b lim: 120 exec/s: 0 rss: 70Mb L: 110/110 MS: 1 InsertRepeatedBytes- 00:06:54.477 [2024-07-12 13:32:42.841039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.477 [2024-07-12 13:32:42.841066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.477 [2024-07-12 13:32:42.841109] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.477 [2024-07-12 13:32:42.841118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.477 #26 NEW cov: 12211 ft: 14260 corp: 12/999b lim: 120 exec/s: 0 rss: 70Mb L: 56/110 MS: 1 EraseBytes- 00:06:54.477 [2024-07-12 13:32:42.891504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.477 [2024-07-12 13:32:42.891531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.477 [2024-07-12 13:32:42.891573] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.477 [2024-07-12 13:32:42.891583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.477 [2024-07-12 13:32:42.891623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.477 [2024-07-12 13:32:42.891635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.477 [2024-07-12 13:32:42.891678] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.477 [2024-07-12 13:32:42.891689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.477 #27 NEW cov: 12211 ft: 14289 corp: 13/1108b lim: 120 exec/s: 0 rss: 70Mb L: 109/110 MS: 1 CrossOver- 00:06:54.477 [2024-07-12 13:32:42.931321] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.477 [2024-07-12 13:32:42.931348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.477 [2024-07-12 13:32:42.931386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.477 [2024-07-12 13:32:42.931398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.477 #28 NEW cov: 12211 ft: 14308 corp: 14/1164b lim: 120 exec/s: 0 rss: 72Mb L: 56/110 MS: 1 ShuffleBytes- 00:06:54.477 [2024-07-12 13:32:42.991738] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.477 [2024-07-12 13:32:42.991765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.477 [2024-07-12 13:32:42.991804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.477 [2024-07-12 13:32:42.991815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.477 [2024-07-12 13:32:42.991847] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.477 [2024-07-12 13:32:42.991860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.477 [2024-07-12 13:32:42.991903] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.477 [2024-07-12 13:32:42.991915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.477 #29 NEW cov: 12211 ft: 14325 corp: 15/1273b lim: 120 exec/s: 0 rss: 72Mb L: 109/110 MS: 1 ChangeByte- 00:06:54.477 [2024-07-12 13:32:43.031863] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.477 [2024-07-12 13:32:43.031889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.477 [2024-07-12 13:32:43.031934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.477 [2024-07-12 13:32:43.031944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.477 [2024-07-12 13:32:43.031986] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.477 [2024-07-12 13:32:43.031998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.477 [2024-07-12 13:32:43.032037] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.477 [2024-07-12 13:32:43.032049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.736 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:54.736 #30 NEW cov: 12234 ft: 14350 corp: 16/1391b lim: 120 exec/s: 0 rss: 72Mb L: 118/118 MS: 1 InsertRepeatedBytes- 00:06:54.736 [2024-07-12 13:32:43.092026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.736 [2024-07-12 13:32:43.092056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.736 [2024-07-12 13:32:43.092100] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.736 [2024-07-12 13:32:43.092110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.736 [2024-07-12 13:32:43.092152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.736 [2024-07-12 13:32:43.092163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.736 [2024-07-12 13:32:43.092209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.736 [2024-07-12 13:32:43.092224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.736 #31 NEW cov: 12234 ft: 14370 corp: 17/1500b lim: 120 exec/s: 0 rss: 72Mb L: 109/118 MS: 1 ChangeByte- 00:06:54.736 [2024-07-12 13:32:43.142002] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.736 [2024-07-12 13:32:43.142028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.736 [2024-07-12 13:32:43.142075] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.737 [2024-07-12 13:32:43.142085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.737 [2024-07-12 13:32:43.142127] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.737 [2024-07-12 13:32:43.142139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.737 #32 NEW cov: 12234 ft: 14410 corp: 18/1590b lim: 120 exec/s: 32 rss: 72Mb L: 90/118 MS: 1 InsertRepeatedBytes- 00:06:54.737 [2024-07-12 13:32:43.202069] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.737 [2024-07-12 13:32:43.202096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.737 [2024-07-12 13:32:43.202130] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:16777216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.737 [2024-07-12 13:32:43.202144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.737 #33 NEW cov: 12234 ft: 14418 corp: 19/1646b lim: 120 exec/s: 33 rss: 72Mb L: 56/118 MS: 1 ChangeBit- 00:06:54.737 [2024-07-12 13:32:43.252426] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.737 [2024-07-12 13:32:43.252453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.737 [2024-07-12 13:32:43.252493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.737 [2024-07-12 13:32:43.252504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.737 [2024-07-12 13:32:43.252533] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.737 [2024-07-12 13:32:43.252545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.737 [2024-07-12 13:32:43.252588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.737 [2024-07-12 13:32:43.252599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.737 #34 NEW cov: 12234 ft: 14426 corp: 20/1764b lim: 120 exec/s: 34 rss: 72Mb L: 118/118 MS: 1 ChangeByte- 00:06:54.737 [2024-07-12 13:32:43.312632] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.737 [2024-07-12 13:32:43.312658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.737 [2024-07-12 13:32:43.312700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.737 [2024-07-12 13:32:43.312713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.737 [2024-07-12 13:32:43.312757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.737 [2024-07-12 13:32:43.312768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.737 [2024-07-12 13:32:43.312811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.737 [2024-07-12 13:32:43.312823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.996 #35 NEW cov: 12234 ft: 14488 corp: 21/1873b lim: 120 exec/s: 35 rss: 72Mb L: 109/118 MS: 1 CopyPart- 00:06:54.996 [2024-07-12 13:32:43.352709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.996 [2024-07-12 13:32:43.352736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.996 [2024-07-12 13:32:43.352778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.996 [2024-07-12 13:32:43.352788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.996 [2024-07-12 13:32:43.352830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.996 [2024-07-12 13:32:43.352841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.996 [2024-07-12 13:32:43.352885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.996 [2024-07-12 13:32:43.352896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.996 #36 NEW cov: 12234 ft: 14503 corp: 22/1987b lim: 120 exec/s: 36 rss: 72Mb L: 114/118 MS: 1 InsertRepeatedBytes- 00:06:54.996 [2024-07-12 13:32:43.412890] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.996 [2024-07-12 13:32:43.412916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.996 [2024-07-12 13:32:43.412957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.996 [2024-07-12 13:32:43.412967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.996 [2024-07-12 13:32:43.413004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.996 [2024-07-12 13:32:43.413015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.996 [2024-07-12 13:32:43.413060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.996 [2024-07-12 13:32:43.413073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.996 #37 NEW cov: 12234 ft: 14537 corp: 23/2096b lim: 120 exec/s: 37 rss: 72Mb L: 109/118 MS: 1 ChangeBit- 00:06:54.996 [2024-07-12 13:32:43.452984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.996 [2024-07-12 13:32:43.453010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.996 [2024-07-12 13:32:43.453055] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.996 [2024-07-12 13:32:43.453065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.996 [2024-07-12 13:32:43.453106] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.996 [2024-07-12 13:32:43.453118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.996 [2024-07-12 13:32:43.453162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.996 [2024-07-12 13:32:43.453173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.996 #38 NEW cov: 12234 ft: 14549 corp: 24/2214b lim: 120 exec/s: 38 rss: 72Mb L: 118/118 MS: 1 CrossOver- 00:06:54.996 [2024-07-12 13:32:43.493089] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.996 [2024-07-12 13:32:43.493114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.996 [2024-07-12 13:32:43.493151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.996 [2024-07-12 13:32:43.493162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.996 [2024-07-12 13:32:43.493201] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.996 [2024-07-12 13:32:43.493214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.996 [2024-07-12 13:32:43.493256] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.996 [2024-07-12 13:32:43.493269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.996 #39 NEW cov: 12234 ft: 14560 corp: 25/2324b lim: 120 exec/s: 39 rss: 72Mb L: 110/118 MS: 1 InsertByte- 00:06:54.996 [2024-07-12 13:32:43.533196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.996 [2024-07-12 13:32:43.533222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.996 [2024-07-12 13:32:43.533269] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:62452 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.996 [2024-07-12 13:32:43.533279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.996 [2024-07-12 13:32:43.533319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.996 [2024-07-12 13:32:43.533331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.996 [2024-07-12 13:32:43.533376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.996 [2024-07-12 
13:32:43.533387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.996 #40 NEW cov: 12234 ft: 14563 corp: 26/2442b lim: 120 exec/s: 40 rss: 72Mb L: 118/118 MS: 1 InsertRepeatedBytes- 00:06:55.255 [2024-07-12 13:32:43.592977] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.255 [2024-07-12 13:32:43.593008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.255 #41 NEW cov: 12234 ft: 15386 corp: 27/2486b lim: 120 exec/s: 41 rss: 72Mb L: 44/118 MS: 1 EraseBytes- 00:06:55.255 [2024-07-12 13:32:43.653412] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.255 [2024-07-12 13:32:43.653440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.255 [2024-07-12 13:32:43.653482] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:524288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.255 [2024-07-12 13:32:43.653491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.255 [2024-07-12 13:32:43.653529] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.255 [2024-07-12 13:32:43.653541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.255 #42 NEW cov: 12234 ft: 15398 corp: 28/2569b lim: 120 exec/s: 42 rss: 72Mb L: 83/118 MS: 1 ChangeBit- 00:06:55.255 [2024-07-12 13:32:43.703682] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.255 [2024-07-12 13:32:43.703709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.255 [2024-07-12 13:32:43.703752] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.255 [2024-07-12 13:32:43.703762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.255 [2024-07-12 13:32:43.703804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.255 [2024-07-12 13:32:43.703816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.255 [2024-07-12 13:32:43.703859] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.255 [2024-07-12 13:32:43.703872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.255 #43 NEW cov: 12234 ft: 15434 corp: 29/2683b lim: 120 exec/s: 43 rss: 72Mb L: 114/118 MS: 1 ShuffleBytes- 00:06:55.255 [2024-07-12 13:32:43.753761] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.255 [2024-07-12 13:32:43.753787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.255 [2024-07-12 13:32:43.753828] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:15 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.255 [2024-07-12 13:32:43.753840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.255 [2024-07-12 13:32:43.753869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1012762419733073422 len:3599 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.255 [2024-07-12 13:32:43.753881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.255 [2024-07-12 13:32:43.753923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:1012762419733073422 len:3585 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.255 [2024-07-12 13:32:43.753935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.255 #44 NEW cov: 12234 ft: 15447 corp: 30/2793b lim: 120 exec/s: 44 rss: 72Mb L: 110/118 MS: 1 ShuffleBytes- 00:06:55.255 [2024-07-12 13:32:43.813976] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.255 [2024-07-12 13:32:43.814002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.255 [2024-07-12 13:32:43.814041] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.255 [2024-07-12 13:32:43.814051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.255 [2024-07-12 13:32:43.814087] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.255 [2024-07-12 13:32:43.814099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.255 [2024-07-12 13:32:43.814142] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.255 [2024-07-12 13:32:43.814154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.515 #45 NEW cov: 12234 ft: 15470 corp: 31/2910b lim: 120 exec/s: 45 rss: 72Mb L: 117/118 MS: 1 CopyPart- 00:06:55.515 [2024-07-12 13:32:43.854034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.515 [2024-07-12 13:32:43.854062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.515 [2024-07-12 13:32:43.854106] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
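The WRITE commands in this run keep orbiting a few magic constants, which decode cleanly with ordinary shell arithmetic; this is a side calculation on the logged values, not part of the harness:

# lba:4110417920 is 0xF5000000, the base address this corpus mutates around
printf '0x%X\n' 4110417920           # -> 0xF5000000
# lba:1012762419733073422 is the byte 0x0E repeated eight times, i.e. an
# InsertRepeatedBytes mutation spilling into the LBA field
printf '0x%X\n' 1012762419733073422  # -> 0xE0E0E0E0E0E0E0E (leading zero elided)
# lba:524288 is 2^19, a single flipped bit
echo $(( 1 << 19 ))                  # -> 524288
# every completion prints status (00/0b): status code type 0x00 (generic
# command status), status code 0x0b, which SPDK renders as the
# INVALID NAMESPACE OR FORMAT text seen throughout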
00:06:55.515 [2024-07-12 13:32:43.854115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.515 [2024-07-12 13:32:43.854158] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:524288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.515 [2024-07-12 13:32:43.854170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.515 [2024-07-12 13:32:43.854214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.515 [2024-07-12 13:32:43.854226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.515 #46 NEW cov: 12234 ft: 15477 corp: 32/3019b lim: 120 exec/s: 46 rss: 72Mb L: 109/118 MS: 1 ChangeBit- 00:06:55.515 [2024-07-12 13:32:43.914209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.515 [2024-07-12 13:32:43.914242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.515 [2024-07-12 13:32:43.914283] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:50 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.515 [2024-07-12 13:32:43.914294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.515 [2024-07-12 13:32:43.914331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.515 [2024-07-12 13:32:43.914343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.515 [2024-07-12 13:32:43.914388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.515 [2024-07-12 13:32:43.914403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.515 #47 NEW cov: 12234 ft: 15507 corp: 33/3130b lim: 120 exec/s: 47 rss: 73Mb L: 111/118 MS: 1 InsertByte- 00:06:55.515 [2024-07-12 13:32:43.974384] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.515 [2024-07-12 13:32:43.974412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.515 [2024-07-12 13:32:43.974454] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.515 [2024-07-12 13:32:43.974465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.515 [2024-07-12 13:32:43.974507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.515 [2024-07-12 13:32:43.974521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.515 [2024-07-12 13:32:43.974564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.515 [2024-07-12 13:32:43.974575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.515 #48 NEW cov: 12234 ft: 15526 corp: 34/3240b lim: 120 exec/s: 48 rss: 73Mb L: 110/118 MS: 1 InsertByte- 00:06:55.516 [2024-07-12 13:32:44.034403] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.516 [2024-07-12 13:32:44.034430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.516 [2024-07-12 13:32:44.034473] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:269380348805120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.516 [2024-07-12 13:32:44.034483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.516 [2024-07-12 13:32:44.034527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.516 [2024-07-12 13:32:44.034540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.516 #49 NEW cov: 12234 ft: 15577 corp: 35/3324b lim: 120 exec/s: 49 rss: 73Mb L: 84/118 MS: 1 CrossOver- 00:06:55.516 [2024-07-12 13:32:44.094688] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4110417920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.516 [2024-07-12 13:32:44.094714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.516 [2024-07-12 13:32:44.094756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:50 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.516 [2024-07-12 13:32:44.094766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.516 [2024-07-12 13:32:44.094808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.516 [2024-07-12 13:32:44.094820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.516 [2024-07-12 13:32:44.094862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.516 [2024-07-12 13:32:44.094873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.776 #50 NEW cov: 12234 ft: 15579 corp: 36/3441b lim: 120 exec/s: 25 rss: 73Mb L: 117/118 MS: 1 CrossOver- 00:06:55.776 #50 DONE cov: 12234 ft: 15579 corp: 36/3441b lim: 120 exec/s: 25 rss: 73Mb 00:06:55.776 Done 50 runs in 2 second(s) 00:06:55.776 13:32:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:06:55.776 13:32:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( 
i++ )) 00:06:55.776 13:32:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:55.776 13:32:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:06:55.776 13:32:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:06:55.776 13:32:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:55.776 13:32:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:55.776 13:32:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:06:55.776 13:32:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:06:55.776 13:32:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:55.776 13:32:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:55.776 13:32:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:06:55.776 13:32:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:06:55.776 13:32:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:06:55.776 13:32:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:06:55.776 13:32:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:55.776 13:32:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:55.776 13:32:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:55.776 13:32:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:06:55.776 [2024-07-12 13:32:44.272695] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:06:55.776 [2024-07-12 13:32:44.272790] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2446469 ] 00:06:55.776 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.036 [2024-07-12 13:32:44.441093] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.036 [2024-07-12 13:32:44.498087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.036 [2024-07-12 13:32:44.559738] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.036 [2024-07-12 13:32:44.576081] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:06:56.036 INFO: Running with entropic power schedule (0xFF, 100). 
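The setup trace above builds the listen port as 44 plus the zero-padded fuzzer index (printf %02d 18 -> 4418) and patches the stock fuzz_json.conf with sed. A minimal sketch of that rewrite, applied to a made-up config fragment (the real fuzz_json.conf contents are not shown in this log):

# hypothetical one-line fragment; only the trsvcid field matters here
echo '{ "trtype": "TCP", "traddr": "127.0.0.1", "trsvcid": "4420" }' |
  sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/'
# -> { "trtype": "TCP", "traddr": "127.0.0.1", "trsvcid": "4418" }
# which matches the port the target reports listening on above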
00:06:56.036 INFO: Seed: 281985898 00:06:56.036 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:56.036 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:56.036 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:06:56.036 INFO: A corpus is not provided, starting from an empty corpus 00:06:56.036 #2 INITED exec/s: 0 rss: 64Mb 00:06:56.036 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:56.036 This may also happen if the target rejected all inputs we tried so far 00:06:56.295 [2024-07-12 13:32:44.630985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:56.295 [2024-07-12 13:32:44.631018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.295 [2024-07-12 13:32:44.631041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:56.295 [2024-07-12 13:32:44.631051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.295 NEW_FUNC[1/695]: 0x4a15b0 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:06:56.295 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:56.295 #21 NEW cov: 11932 ft: 11934 corp: 2/53b lim: 100 exec/s: 0 rss: 70Mb L: 52/52 MS: 4 ChangeByte-ChangeBit-CMP-InsertRepeatedBytes- DE: "\177\000\000\000"- 00:06:56.295 [2024-07-12 13:32:44.811764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:56.295 [2024-07-12 13:32:44.811818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.295 [2024-07-12 13:32:44.811883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:56.295 [2024-07-12 13:32:44.811904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.295 [2024-07-12 13:32:44.811967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:56.295 [2024-07-12 13:32:44.811987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.295 #27 NEW cov: 12063 ft: 12942 corp: 3/127b lim: 100 exec/s: 0 rss: 70Mb L: 74/74 MS: 1 CopyPart- 00:06:56.557 [2024-07-12 13:32:44.881573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:56.557 [2024-07-12 13:32:44.881601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.557 [2024-07-12 13:32:44.881634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:56.557 [2024-07-12 13:32:44.881644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.557 #28 NEW cov: 12069 ft: 13217 corp: 4/186b lim: 100 exec/s: 0 rss: 70Mb L: 59/74 MS: 1 
InsertRepeatedBytes- 00:06:56.557 [2024-07-12 13:32:44.921777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:56.557 [2024-07-12 13:32:44.921800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.557 [2024-07-12 13:32:44.921838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:56.557 [2024-07-12 13:32:44.921847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.557 [2024-07-12 13:32:44.921883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:56.557 [2024-07-12 13:32:44.921895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.557 #29 NEW cov: 12154 ft: 13440 corp: 5/260b lim: 100 exec/s: 0 rss: 70Mb L: 74/74 MS: 1 ChangeByte- 00:06:56.557 [2024-07-12 13:32:44.981928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:56.557 [2024-07-12 13:32:44.981953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.557 [2024-07-12 13:32:44.981994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:56.557 [2024-07-12 13:32:44.982003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.557 [2024-07-12 13:32:44.982048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:56.557 [2024-07-12 13:32:44.982059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.557 #35 NEW cov: 12154 ft: 13487 corp: 6/335b lim: 100 exec/s: 0 rss: 70Mb L: 75/75 MS: 1 InsertByte- 00:06:56.557 [2024-07-12 13:32:45.042000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:56.557 [2024-07-12 13:32:45.042024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.557 [2024-07-12 13:32:45.042059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:56.557 [2024-07-12 13:32:45.042070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.557 #36 NEW cov: 12154 ft: 13539 corp: 7/384b lim: 100 exec/s: 0 rss: 70Mb L: 49/75 MS: 1 EraseBytes- 00:06:56.557 [2024-07-12 13:32:45.092214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:56.557 [2024-07-12 13:32:45.092243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.557 [2024-07-12 13:32:45.092285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:56.557 [2024-07-12 13:32:45.092294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.557 [2024-07-12 13:32:45.092335] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:56.557 [2024-07-12 13:32:45.092346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.557 #37 NEW cov: 12154 ft: 13614 corp: 8/444b lim: 100 exec/s: 0 rss: 70Mb L: 60/75 MS: 1 InsertByte- 00:06:56.882 [2024-07-12 13:32:45.152286] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:56.882 [2024-07-12 13:32:45.152310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.882 [2024-07-12 13:32:45.152349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:56.882 [2024-07-12 13:32:45.152358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.882 #38 NEW cov: 12154 ft: 13675 corp: 9/493b lim: 100 exec/s: 0 rss: 70Mb L: 49/75 MS: 1 ChangeBit- 00:06:56.882 [2024-07-12 13:32:45.212457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:56.882 [2024-07-12 13:32:45.212480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.882 [2024-07-12 13:32:45.212521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:56.882 [2024-07-12 13:32:45.212529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.882 #39 NEW cov: 12154 ft: 13707 corp: 10/545b lim: 100 exec/s: 0 rss: 70Mb L: 52/75 MS: 1 ChangeByte- 00:06:56.882 [2024-07-12 13:32:45.262561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:56.882 [2024-07-12 13:32:45.262585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.882 [2024-07-12 13:32:45.262625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:56.882 [2024-07-12 13:32:45.262633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.882 #40 NEW cov: 12154 ft: 13828 corp: 11/594b lim: 100 exec/s: 0 rss: 70Mb L: 49/75 MS: 1 ChangeByte- 00:06:56.882 [2024-07-12 13:32:45.322730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:56.882 [2024-07-12 13:32:45.322754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.882 [2024-07-12 13:32:45.322790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:56.882 [2024-07-12 13:32:45.322801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.882 #41 NEW cov: 12154 ft: 13848 corp: 12/653b lim: 100 exec/s: 0 rss: 72Mb L: 59/75 MS: 1 ChangeBinInt- 00:06:56.882 [2024-07-12 13:32:45.372952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:56.882 [2024-07-12 13:32:45.372977] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.882 [2024-07-12 13:32:45.373017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:56.882 [2024-07-12 13:32:45.373026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.882 [2024-07-12 13:32:45.373069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:56.882 [2024-07-12 13:32:45.373081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.882 #42 NEW cov: 12154 ft: 13864 corp: 13/713b lim: 100 exec/s: 0 rss: 72Mb L: 60/75 MS: 1 ChangeByte- 00:06:56.882 [2024-07-12 13:32:45.433097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:56.882 [2024-07-12 13:32:45.433122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.882 [2024-07-12 13:32:45.433163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:56.882 [2024-07-12 13:32:45.433172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.882 [2024-07-12 13:32:45.433213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:56.882 [2024-07-12 13:32:45.433223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.158 #43 NEW cov: 12154 ft: 13913 corp: 14/773b lim: 100 exec/s: 0 rss: 72Mb L: 60/75 MS: 1 ChangeBit- 00:06:57.158 [2024-07-12 13:32:45.483343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.158 [2024-07-12 13:32:45.483367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.158 [2024-07-12 13:32:45.483406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.158 [2024-07-12 13:32:45.483415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.158 [2024-07-12 13:32:45.483450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:57.158 [2024-07-12 13:32:45.483461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.158 [2024-07-12 13:32:45.483501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:57.158 [2024-07-12 13:32:45.483512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.158 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:57.158 #44 NEW cov: 12177 ft: 14244 corp: 15/860b lim: 100 exec/s: 0 rss: 72Mb L: 87/87 MS: 1 InsertRepeatedBytes- 00:06:57.159 [2024-07-12 13:32:45.543399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.159 [2024-07-12 
13:32:45.543430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.159 [2024-07-12 13:32:45.543464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.159 [2024-07-12 13:32:45.543475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.159 [2024-07-12 13:32:45.543515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:57.159 [2024-07-12 13:32:45.543526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.159 #45 NEW cov: 12177 ft: 14274 corp: 16/920b lim: 100 exec/s: 0 rss: 72Mb L: 60/87 MS: 1 InsertByte- 00:06:57.159 [2024-07-12 13:32:45.593509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.159 [2024-07-12 13:32:45.593535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.159 [2024-07-12 13:32:45.593570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.159 [2024-07-12 13:32:45.593582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.159 [2024-07-12 13:32:45.593624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:57.159 [2024-07-12 13:32:45.593636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.159 #46 NEW cov: 12177 ft: 14299 corp: 17/994b lim: 100 exec/s: 46 rss: 72Mb L: 74/87 MS: 1 CrossOver- 00:06:57.159 [2024-07-12 13:32:45.633620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.159 [2024-07-12 13:32:45.633646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.159 [2024-07-12 13:32:45.633682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.159 [2024-07-12 13:32:45.633691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.159 [2024-07-12 13:32:45.633731] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:57.159 [2024-07-12 13:32:45.633743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.159 #47 NEW cov: 12177 ft: 14325 corp: 18/1054b lim: 100 exec/s: 47 rss: 72Mb L: 60/87 MS: 1 ChangeByte- 00:06:57.159 [2024-07-12 13:32:45.693683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.159 [2024-07-12 13:32:45.693708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.159 [2024-07-12 13:32:45.693748] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.159 [2024-07-12 13:32:45.693756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.159 #48 NEW cov: 12177 ft: 14340 corp: 19/1111b lim: 100 exec/s: 48 rss: 72Mb L: 57/87 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:06:57.419 [2024-07-12 13:32:45.753894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.419 [2024-07-12 13:32:45.753918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.419 [2024-07-12 13:32:45.753962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.419 [2024-07-12 13:32:45.753970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.419 [2024-07-12 13:32:45.754015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:57.419 [2024-07-12 13:32:45.754026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.419 #49 NEW cov: 12177 ft: 14352 corp: 20/1171b lim: 100 exec/s: 49 rss: 72Mb L: 60/87 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:57.419 [2024-07-12 13:32:45.814090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.419 [2024-07-12 13:32:45.814115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.419 [2024-07-12 13:32:45.814155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.419 [2024-07-12 13:32:45.814165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.419 [2024-07-12 13:32:45.814206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:57.419 [2024-07-12 13:32:45.814218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.419 #50 NEW cov: 12177 ft: 14389 corp: 21/1231b lim: 100 exec/s: 50 rss: 72Mb L: 60/87 MS: 1 ChangeByte- 00:06:57.419 [2024-07-12 13:32:45.874258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.419 [2024-07-12 13:32:45.874282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.419 [2024-07-12 13:32:45.874320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.419 [2024-07-12 13:32:45.874330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.419 [2024-07-12 13:32:45.874364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:57.419 [2024-07-12 13:32:45.874375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.419 #51 NEW cov: 12177 ft: 14441 corp: 22/1292b lim: 100 exec/s: 51 rss: 72Mb L: 61/87 MS: 1 InsertByte- 00:06:57.419 [2024-07-12 13:32:45.934400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) 
sqid:1 cid:0 nsid:0 00:06:57.419 [2024-07-12 13:32:45.934425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.419 [2024-07-12 13:32:45.934468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.419 [2024-07-12 13:32:45.934477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.419 [2024-07-12 13:32:45.934518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:57.419 [2024-07-12 13:32:45.934530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.419 #52 NEW cov: 12177 ft: 14442 corp: 23/1352b lim: 100 exec/s: 52 rss: 72Mb L: 60/87 MS: 1 CopyPart- 00:06:57.419 [2024-07-12 13:32:45.974501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.419 [2024-07-12 13:32:45.974524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.419 [2024-07-12 13:32:45.974564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.419 [2024-07-12 13:32:45.974573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.419 [2024-07-12 13:32:45.974612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:57.419 [2024-07-12 13:32:45.974623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.678 #53 NEW cov: 12177 ft: 14482 corp: 24/1412b lim: 100 exec/s: 53 rss: 73Mb L: 60/87 MS: 1 ChangeBinInt- 00:06:57.678 [2024-07-12 13:32:46.034776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.678 [2024-07-12 13:32:46.034801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.678 [2024-07-12 13:32:46.034840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.678 [2024-07-12 13:32:46.034849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.678 [2024-07-12 13:32:46.034883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:57.678 [2024-07-12 13:32:46.034894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.678 [2024-07-12 13:32:46.034937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:57.678 [2024-07-12 13:32:46.034948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.678 #54 NEW cov: 12177 ft: 14502 corp: 25/1499b lim: 100 exec/s: 54 rss: 73Mb L: 87/87 MS: 1 ChangeByte- 00:06:57.678 [2024-07-12 13:32:46.094738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.678 [2024-07-12 13:32:46.094763] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.678 [2024-07-12 13:32:46.094800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.678 [2024-07-12 13:32:46.094811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.678 #55 NEW cov: 12177 ft: 14514 corp: 26/1548b lim: 100 exec/s: 55 rss: 73Mb L: 49/87 MS: 1 PersAutoDict- DE: "\177\000\000\000"- 00:06:57.678 [2024-07-12 13:32:46.144922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.678 [2024-07-12 13:32:46.144946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.678 [2024-07-12 13:32:46.144986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.678 [2024-07-12 13:32:46.144995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.678 [2024-07-12 13:32:46.145035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:57.679 [2024-07-12 13:32:46.145047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.679 #56 NEW cov: 12177 ft: 14519 corp: 27/1611b lim: 100 exec/s: 56 rss: 73Mb L: 63/87 MS: 1 CopyPart- 00:06:57.679 [2024-07-12 13:32:46.184996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.679 [2024-07-12 13:32:46.185019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.679 [2024-07-12 13:32:46.185060] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.679 [2024-07-12 13:32:46.185068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.679 [2024-07-12 13:32:46.185109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:57.679 [2024-07-12 13:32:46.185121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.679 #57 NEW cov: 12177 ft: 14526 corp: 28/1679b lim: 100 exec/s: 57 rss: 73Mb L: 68/87 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:57.679 [2024-07-12 13:32:46.235032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.679 [2024-07-12 13:32:46.235056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.679 [2024-07-12 13:32:46.235096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.679 [2024-07-12 13:32:46.235104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.939 #58 NEW cov: 12177 ft: 14533 corp: 29/1728b lim: 100 exec/s: 58 rss: 73Mb L: 49/87 MS: 1 ShuffleBytes- 00:06:57.939 [2024-07-12 13:32:46.295245] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.939 [2024-07-12 13:32:46.295268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.939 [2024-07-12 13:32:46.295302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.939 [2024-07-12 13:32:46.295314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.939 #59 NEW cov: 12177 ft: 14570 corp: 30/1785b lim: 100 exec/s: 59 rss: 73Mb L: 57/87 MS: 1 ChangeByte- 00:06:57.939 [2024-07-12 13:32:46.355404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.939 [2024-07-12 13:32:46.355427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.939 [2024-07-12 13:32:46.355463] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.939 [2024-07-12 13:32:46.355473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.939 #60 NEW cov: 12177 ft: 14604 corp: 31/1834b lim: 100 exec/s: 60 rss: 73Mb L: 49/87 MS: 1 ChangeByte- 00:06:57.939 [2024-07-12 13:32:46.395677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.939 [2024-07-12 13:32:46.395701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.939 [2024-07-12 13:32:46.395740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.939 [2024-07-12 13:32:46.395749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.939 [2024-07-12 13:32:46.395785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:57.939 [2024-07-12 13:32:46.395795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.939 [2024-07-12 13:32:46.395834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:57.939 [2024-07-12 13:32:46.395845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.939 #61 NEW cov: 12177 ft: 14609 corp: 32/1917b lim: 100 exec/s: 61 rss: 73Mb L: 83/87 MS: 1 CopyPart- 00:06:57.939 [2024-07-12 13:32:46.445794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.940 [2024-07-12 13:32:46.445817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.940 [2024-07-12 13:32:46.445854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.940 [2024-07-12 13:32:46.445865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.940 [2024-07-12 13:32:46.445893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 
nsid:0 00:06:57.940 [2024-07-12 13:32:46.445904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.940 [2024-07-12 13:32:46.445950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:57.940 [2024-07-12 13:32:46.445961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.940 #62 NEW cov: 12177 ft: 14615 corp: 33/2004b lim: 100 exec/s: 62 rss: 73Mb L: 87/87 MS: 1 ShuffleBytes- 00:06:57.940 [2024-07-12 13:32:46.505801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:57.940 [2024-07-12 13:32:46.505825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.940 [2024-07-12 13:32:46.505866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:57.940 [2024-07-12 13:32:46.505874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.200 #63 NEW cov: 12177 ft: 14626 corp: 34/2053b lim: 100 exec/s: 63 rss: 73Mb L: 49/87 MS: 1 CrossOver- 00:06:58.200 [2024-07-12 13:32:46.566023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.200 [2024-07-12 13:32:46.566048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.200 [2024-07-12 13:32:46.566086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.200 [2024-07-12 13:32:46.566096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.200 [2024-07-12 13:32:46.566137] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.200 [2024-07-12 13:32:46.566148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.200 #64 NEW cov: 12177 ft: 14636 corp: 35/2117b lim: 100 exec/s: 64 rss: 73Mb L: 64/87 MS: 1 CrossOver- 00:06:58.200 [2024-07-12 13:32:46.616144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.200 [2024-07-12 13:32:46.616168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.200 [2024-07-12 13:32:46.616209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.200 [2024-07-12 13:32:46.616218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.200 [2024-07-12 13:32:46.616262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.200 [2024-07-12 13:32:46.616274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.200 #65 NEW cov: 12177 ft: 14652 corp: 36/2192b lim: 100 exec/s: 32 rss: 73Mb L: 75/87 MS: 1 InsertByte- 00:06:58.200 #65 DONE cov: 12177 ft: 14652 corp: 36/2192b lim: 100 exec/s: 32 rss: 73Mb 
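The closing status line is internally consistent; as a quick sanity check on the reported figures (our arithmetic, not libFuzzer output), 65 runs over the 2-second budget gives the printed exec/s, and corp: 36/2192b reads as 36 corpus units totaling 2192 bytes:

echo $(( 65 / 2 ))     # -> 32, the exec/s value in the DONE line
echo $(( 2192 / 36 ))  # -> 60, so the average corpus unit is about 61 bytes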
00:06:58.200 ###### Recommended dictionary. ###### 00:06:58.200 "\177\000\000\000" # Uses: 1 00:06:58.201 "\001\000\000\000\000\000\000\000" # Uses: 2 00:06:58.201 ###### End of recommended dictionary. ###### 00:06:58.201 Done 65 runs in 2 second(s) 00:06:58.201 13:32:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:06:58.201 13:32:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:58.201 13:32:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:58.201 13:32:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:06:58.201 13:32:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:06:58.201 13:32:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:58.201 13:32:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:58.201 13:32:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:06:58.201 13:32:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:06:58.201 13:32:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:58.201 13:32:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:58.201 13:32:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:06:58.201 13:32:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:06:58.201 13:32:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:06:58.201 13:32:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:06:58.201 13:32:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:58.201 13:32:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:58.201 13:32:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:58.201 13:32:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:06:58.201 [2024-07-12 13:32:46.774927] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
00:06:58.201 [2024-07-12 13:32:46.775006] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2446976 ] 00:06:58.460 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.460 [2024-07-12 13:32:46.937909] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.460 [2024-07-12 13:32:46.996585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.720 [2024-07-12 13:32:47.058810] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.720 [2024-07-12 13:32:47.075145] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:06:58.720 INFO: Running with entropic power schedule (0xFF, 100). 00:06:58.720 INFO: Seed: 2778990036 00:06:58.720 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:06:58.720 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:06:58.720 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:06:58.720 INFO: A corpus is not provided, starting from an empty corpus 00:06:58.720 #2 INITED exec/s: 0 rss: 64Mb 00:06:58.720 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:58.720 This may also happen if the target rejected all inputs we tried so far 00:06:58.720 [2024-07-12 13:32:47.123815] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:06:58.720 [2024-07-12 13:32:47.123856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.720 [2024-07-12 13:32:47.123892] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:58.720 [2024-07-12 13:32:47.123910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.720 [2024-07-12 13:32:47.123965] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:58.720 [2024-07-12 13:32:47.123983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.720 [2024-07-12 13:32:47.124039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:58.720 [2024-07-12 13:32:47.124061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.720 NEW_FUNC[1/694]: 0x4a4570 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:06:58.720 NEW_FUNC[2/694]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:58.720 #6 NEW cov: 11904 ft: 11912 corp: 2/47b lim: 50 exec/s: 0 rss: 70Mb L: 46/46 MS: 4 CopyPart-ChangeBit-EraseBytes-InsertRepeatedBytes- 00:06:58.981 [2024-07-12 13:32:47.304291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 
lba:18446744069417664511 len:65536 00:06:58.981 [2024-07-12 13:32:47.304353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.981 [2024-07-12 13:32:47.304423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:58.981 [2024-07-12 13:32:47.304447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.981 [2024-07-12 13:32:47.304512] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:58.981 [2024-07-12 13:32:47.304535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.981 [2024-07-12 13:32:47.304604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:58.981 [2024-07-12 13:32:47.304627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.981 NEW_FUNC[1/1]: 0x133fc60 in nvmf_transport_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:727 00:06:58.981 #7 NEW cov: 12041 ft: 12576 corp: 3/93b lim: 50 exec/s: 0 rss: 70Mb L: 46/46 MS: 1 ChangeBinInt- 00:06:58.981 [2024-07-12 13:32:47.374144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069417664511 len:65536 00:06:58.981 [2024-07-12 13:32:47.374174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.981 [2024-07-12 13:32:47.374216] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:58.981 [2024-07-12 13:32:47.374225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.981 [2024-07-12 13:32:47.374270] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:58.981 [2024-07-12 13:32:47.374283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.981 [2024-07-12 13:32:47.374325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:58.981 [2024-07-12 13:32:47.374337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.981 #8 NEW cov: 12047 ft: 12780 corp: 4/139b lim: 50 exec/s: 0 rss: 70Mb L: 46/46 MS: 1 ShuffleBytes- 00:06:58.981 [2024-07-12 13:32:47.434300] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:06:58.981 [2024-07-12 13:32:47.434328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.981 [2024-07-12 13:32:47.434369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 
len:65536 00:06:58.981 [2024-07-12 13:32:47.434382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.981 [2024-07-12 13:32:47.434421] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446462603027808255 len:1 00:06:58.981 [2024-07-12 13:32:47.434432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.981 [2024-07-12 13:32:47.434472] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744069414584323 len:65536 00:06:58.981 [2024-07-12 13:32:47.434483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.981 #9 NEW cov: 12132 ft: 13032 corp: 5/185b lim: 50 exec/s: 0 rss: 70Mb L: 46/46 MS: 1 ChangeBinInt- 00:06:58.981 [2024-07-12 13:32:47.484447] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069417664511 len:65536 00:06:58.981 [2024-07-12 13:32:47.484475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.981 [2024-07-12 13:32:47.484516] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:58.981 [2024-07-12 13:32:47.484525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.981 [2024-07-12 13:32:47.484563] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:58.981 [2024-07-12 13:32:47.484575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.981 [2024-07-12 13:32:47.484617] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:58.981 [2024-07-12 13:32:47.484629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.981 #10 NEW cov: 12132 ft: 13154 corp: 6/231b lim: 50 exec/s: 0 rss: 70Mb L: 46/46 MS: 1 ShuffleBytes- 00:06:58.981 [2024-07-12 13:32:47.544600] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069417664511 len:65536 00:06:58.981 [2024-07-12 13:32:47.544628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.981 [2024-07-12 13:32:47.544664] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:58.981 [2024-07-12 13:32:47.544675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.981 [2024-07-12 13:32:47.544710] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:58.981 [2024-07-12 13:32:47.544723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 
m:0 dnr:1 00:06:58.981 [2024-07-12 13:32:47.544762] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:58.981 [2024-07-12 13:32:47.544773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.242 #11 NEW cov: 12132 ft: 13248 corp: 7/277b lim: 50 exec/s: 0 rss: 70Mb L: 46/46 MS: 1 ChangeByte- 00:06:59.242 [2024-07-12 13:32:47.594818] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446742978492891135 len:1 00:06:59.242 [2024-07-12 13:32:47.594844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.242 [2024-07-12 13:32:47.594890] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744069431361535 len:65536 00:06:59.242 [2024-07-12 13:32:47.594900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.242 [2024-07-12 13:32:47.594940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:59.242 [2024-07-12 13:32:47.594951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.242 [2024-07-12 13:32:47.594992] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:12884901888 len:65536 00:06:59.242 [2024-07-12 13:32:47.595003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.242 [2024-07-12 13:32:47.595046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65291 00:06:59.242 [2024-07-12 13:32:47.595057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:59.242 #12 NEW cov: 12132 ft: 13408 corp: 8/327b lim: 50 exec/s: 0 rss: 70Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:06:59.242 [2024-07-12 13:32:47.654700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65281 00:06:59.242 [2024-07-12 13:32:47.654727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.242 [2024-07-12 13:32:47.654762] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4393751543808 len:65536 00:06:59.242 [2024-07-12 13:32:47.654775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.242 #13 NEW cov: 12132 ft: 13717 corp: 9/356b lim: 50 exec/s: 0 rss: 70Mb L: 29/50 MS: 1 EraseBytes- 00:06:59.242 [2024-07-12 13:32:47.715116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069417664511 len:1 00:06:59.242 [2024-07-12 13:32:47.715143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.242 [2024-07-12 13:32:47.715181] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744069414649855 len:65536 00:06:59.243 [2024-07-12 13:32:47.715193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.243 [2024-07-12 13:32:47.715224] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:59.243 [2024-07-12 13:32:47.715241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.243 [2024-07-12 13:32:47.715282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:59.243 [2024-07-12 13:32:47.715294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.243 [2024-07-12 13:32:47.715332] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65291 00:06:59.243 [2024-07-12 13:32:47.715345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:59.243 #14 NEW cov: 12132 ft: 13782 corp: 10/406b lim: 50 exec/s: 0 rss: 72Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:06:59.243 [2024-07-12 13:32:47.755252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069417664511 len:1 00:06:59.243 [2024-07-12 13:32:47.755281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.243 [2024-07-12 13:32:47.755319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744069414649855 len:65536 00:06:59.243 [2024-07-12 13:32:47.755329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.243 [2024-07-12 13:32:47.755363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:59.243 [2024-07-12 13:32:47.755374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.243 [2024-07-12 13:32:47.755413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:59.243 [2024-07-12 13:32:47.755425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.243 [2024-07-12 13:32:47.755464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65291 00:06:59.243 [2024-07-12 13:32:47.755476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:59.243 #15 NEW cov: 12132 ft: 13828 corp: 11/456b lim: 50 exec/s: 0 rss: 72Mb L: 50/50 MS: 1 ShuffleBytes- 00:06:59.243 [2024-07-12 13:32:47.815138] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069417664511 len:65536 00:06:59.243 [2024-07-12 13:32:47.815164] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.243 [2024-07-12 13:32:47.815201] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:59.243 [2024-07-12 13:32:47.815213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.503 #16 NEW cov: 12132 ft: 13879 corp: 12/484b lim: 50 exec/s: 0 rss: 72Mb L: 28/50 MS: 1 EraseBytes- 00:06:59.503 [2024-07-12 13:32:47.865212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:06:59.503 [2024-07-12 13:32:47.865243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.503 [2024-07-12 13:32:47.865284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446462800596303871 len:65536 00:06:59.503 [2024-07-12 13:32:47.865294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.503 #17 NEW cov: 12132 ft: 13905 corp: 13/510b lim: 50 exec/s: 0 rss: 72Mb L: 26/50 MS: 1 CrossOver- 00:06:59.503 [2024-07-12 13:32:47.915375] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15914838024376868060 len:56541 00:06:59.503 [2024-07-12 13:32:47.915402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.503 [2024-07-12 13:32:47.915444] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15914838024376868060 len:56541 00:06:59.503 [2024-07-12 13:32:47.915453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.503 #18 NEW cov: 12132 ft: 13913 corp: 14/533b lim: 50 exec/s: 0 rss: 72Mb L: 23/50 MS: 1 InsertRepeatedBytes- 00:06:59.503 [2024-07-12 13:32:47.965683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069417664511 len:65536 00:06:59.503 [2024-07-12 13:32:47.965708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.503 [2024-07-12 13:32:47.965753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:59.503 [2024-07-12 13:32:47.965763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.503 [2024-07-12 13:32:47.965801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073701163007 len:65536 00:06:59.503 [2024-07-12 13:32:47.965814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.503 [2024-07-12 13:32:47.965858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:59.503 [2024-07-12 13:32:47.965869] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.503 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:59.503 #19 NEW cov: 12155 ft: 13997 corp: 15/579b lim: 50 exec/s: 0 rss: 72Mb L: 46/50 MS: 1 ChangeBit- 00:06:59.503 [2024-07-12 13:32:48.025854] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069417664511 len:65536 00:06:59.503 [2024-07-12 13:32:48.025883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.503 [2024-07-12 13:32:48.025925] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743012852629503 len:65536 00:06:59.503 [2024-07-12 13:32:48.025935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.503 [2024-07-12 13:32:48.025972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:59.503 [2024-07-12 13:32:48.025984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.503 [2024-07-12 13:32:48.026024] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:59.503 [2024-07-12 13:32:48.026036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.503 #20 NEW cov: 12155 ft: 14024 corp: 16/625b lim: 50 exec/s: 0 rss: 72Mb L: 46/50 MS: 1 ChangeBinInt- 00:06:59.503 [2024-07-12 13:32:48.066006] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446742978490793983 len:1 00:06:59.503 [2024-07-12 13:32:48.066034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.503 [2024-07-12 13:32:48.066074] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744069431361535 len:65536 00:06:59.503 [2024-07-12 13:32:48.066084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.503 [2024-07-12 13:32:48.066123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:59.503 [2024-07-12 13:32:48.066136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.503 [2024-07-12 13:32:48.066174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:12884901888 len:65536 00:06:59.503 [2024-07-12 13:32:48.066185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.503 [2024-07-12 13:32:48.066227] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65291 00:06:59.503 [2024-07-12 13:32:48.066247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 
cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:59.763 #21 NEW cov: 12155 ft: 14058 corp: 17/675b lim: 50 exec/s: 0 rss: 72Mb L: 50/50 MS: 1 ChangeBit- 00:06:59.763 [2024-07-12 13:32:48.115792] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069417664511 len:65536 00:06:59.763 [2024-07-12 13:32:48.115819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.763 #22 NEW cov: 12155 ft: 14334 corp: 18/689b lim: 50 exec/s: 22 rss: 72Mb L: 14/50 MS: 1 CrossOver- 00:06:59.763 [2024-07-12 13:32:48.166196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069417664511 len:65536 00:06:59.763 [2024-07-12 13:32:48.166224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.763 [2024-07-12 13:32:48.166268] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743584083279871 len:65536 00:06:59.763 [2024-07-12 13:32:48.166278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.763 [2024-07-12 13:32:48.166318] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073701163007 len:65536 00:06:59.763 [2024-07-12 13:32:48.166330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.763 [2024-07-12 13:32:48.166371] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:59.763 [2024-07-12 13:32:48.166382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.763 #23 NEW cov: 12155 ft: 14367 corp: 19/735b lim: 50 exec/s: 23 rss: 72Mb L: 46/50 MS: 1 ChangeByte- 00:06:59.763 [2024-07-12 13:32:48.226067] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069417664511 len:47 00:06:59.763 [2024-07-12 13:32:48.226094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.763 #24 NEW cov: 12155 ft: 14393 corp: 20/752b lim: 50 exec/s: 24 rss: 72Mb L: 17/50 MS: 1 CrossOver- 00:06:59.763 [2024-07-12 13:32:48.286243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069551882239 len:47 00:06:59.763 [2024-07-12 13:32:48.286270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.763 #25 NEW cov: 12155 ft: 14418 corp: 21/769b lim: 50 exec/s: 25 rss: 72Mb L: 17/50 MS: 1 ChangeBit- 00:07:00.023 [2024-07-12 13:32:48.346529] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374686483966590975 len:64256 00:07:00.023 [2024-07-12 13:32:48.346555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.023 [2024-07-12 13:32:48.346587] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446462800596303871 
len:65536 00:07:00.023 [2024-07-12 13:32:48.346599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.023 #26 NEW cov: 12155 ft: 14440 corp: 22/795b lim: 50 exec/s: 26 rss: 72Mb L: 26/50 MS: 1 ChangeBinInt- 00:07:00.023 [2024-07-12 13:32:48.406880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069417664511 len:65536 00:07:00.023 [2024-07-12 13:32:48.406906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.023 [2024-07-12 13:32:48.406942] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:00.023 [2024-07-12 13:32:48.406956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.023 [2024-07-12 13:32:48.406994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073701163007 len:65536 00:07:00.023 [2024-07-12 13:32:48.407007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.023 [2024-07-12 13:32:48.407048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:07:00.023 [2024-07-12 13:32:48.407059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.023 #27 NEW cov: 12155 ft: 14459 corp: 23/841b lim: 50 exec/s: 27 rss: 72Mb L: 46/50 MS: 1 ChangeBinInt- 00:07:00.023 [2024-07-12 13:32:48.446766] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069420482559 len:65536 00:07:00.023 [2024-07-12 13:32:48.446792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.023 [2024-07-12 13:32:48.446834] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:00.023 [2024-07-12 13:32:48.446843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.023 #28 NEW cov: 12155 ft: 14488 corp: 24/869b lim: 50 exec/s: 28 rss: 72Mb L: 28/50 MS: 1 ChangeByte- 00:07:00.023 [2024-07-12 13:32:48.506838] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4611686014135500799 len:65536 00:07:00.023 [2024-07-12 13:32:48.506865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.023 #29 NEW cov: 12155 ft: 14537 corp: 25/883b lim: 50 exec/s: 29 rss: 72Mb L: 14/50 MS: 1 ChangeByte- 00:07:00.023 [2024-07-12 13:32:48.556963] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446742974201004031 len:12032 00:07:00.023 [2024-07-12 13:32:48.556989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.023 #30 NEW cov: 12155 ft: 14543 corp: 26/900b lim: 50 exec/s: 30 rss: 72Mb L: 17/50 MS: 1 
CopyPart- 00:07:00.283 [2024-07-12 13:32:48.607491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446742978490793983 len:1 00:07:00.283 [2024-07-12 13:32:48.607518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.283 [2024-07-12 13:32:48.607557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744069431361535 len:65536 00:07:00.283 [2024-07-12 13:32:48.607568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.283 [2024-07-12 13:32:48.607600] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:00.283 [2024-07-12 13:32:48.607612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.283 [2024-07-12 13:32:48.607653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:12884901888 len:65536 00:07:00.283 [2024-07-12 13:32:48.607665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.283 [2024-07-12 13:32:48.607705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65291 00:07:00.283 [2024-07-12 13:32:48.607720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:00.283 #31 NEW cov: 12155 ft: 14578 corp: 27/950b lim: 50 exec/s: 31 rss: 72Mb L: 50/50 MS: 1 ShuffleBytes- 00:07:00.283 [2024-07-12 13:32:48.667538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069417664511 len:65536 00:07:00.283 [2024-07-12 13:32:48.667565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.283 [2024-07-12 13:32:48.667603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709497087 len:65536 00:07:00.283 [2024-07-12 13:32:48.667613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.283 [2024-07-12 13:32:48.667645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709518847 len:65536 00:07:00.284 [2024-07-12 13:32:48.667656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.284 [2024-07-12 13:32:48.667697] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:07:00.284 [2024-07-12 13:32:48.667709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.284 #32 NEW cov: 12155 ft: 14589 corp: 28/997b lim: 50 exec/s: 32 rss: 72Mb L: 47/50 MS: 1 InsertByte- 00:07:00.284 [2024-07-12 13:32:48.727700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069417664511 
len:65536 00:07:00.284 [2024-07-12 13:32:48.727726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.284 [2024-07-12 13:32:48.727765] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743584083279871 len:65536 00:07:00.284 [2024-07-12 13:32:48.727776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.284 [2024-07-12 13:32:48.727807] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073701163007 len:65536 00:07:00.284 [2024-07-12 13:32:48.727818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.284 [2024-07-12 13:32:48.727859] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:07:00.284 [2024-07-12 13:32:48.727871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.284 #33 NEW cov: 12155 ft: 14616 corp: 29/1043b lim: 50 exec/s: 33 rss: 72Mb L: 46/50 MS: 1 CrossOver- 00:07:00.284 [2024-07-12 13:32:48.787856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069417664511 len:65536 00:07:00.284 [2024-07-12 13:32:48.787881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.284 [2024-07-12 13:32:48.787921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:00.284 [2024-07-12 13:32:48.787931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.284 [2024-07-12 13:32:48.787969] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:00.284 [2024-07-12 13:32:48.787981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.284 [2024-07-12 13:32:48.788022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:07:00.284 [2024-07-12 13:32:48.788037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.284 #34 NEW cov: 12155 ft: 14621 corp: 30/1089b lim: 50 exec/s: 34 rss: 72Mb L: 46/50 MS: 1 ShuffleBytes- 00:07:00.284 [2024-07-12 13:32:48.827961] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069417664511 len:65536 00:07:00.284 [2024-07-12 13:32:48.827987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.284 [2024-07-12 13:32:48.828025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743012852629503 len:65536 00:07:00.284 [2024-07-12 13:32:48.828035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:07:00.284 [2024-07-12 13:32:48.828070] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:00.284 [2024-07-12 13:32:48.828083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.284 [2024-07-12 13:32:48.828126] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:07:00.284 [2024-07-12 13:32:48.828138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.544 #35 NEW cov: 12155 ft: 14649 corp: 31/1135b lim: 50 exec/s: 35 rss: 72Mb L: 46/50 MS: 1 ShuffleBytes- 00:07:00.544 [2024-07-12 13:32:48.888105] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446742978490793983 len:1 00:07:00.544 [2024-07-12 13:32:48.888130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.544 [2024-07-12 13:32:48.888178] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744069431361535 len:65536 00:07:00.544 [2024-07-12 13:32:48.888187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.544 [2024-07-12 13:32:48.888226] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:00.544 [2024-07-12 13:32:48.888242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.544 [2024-07-12 13:32:48.888283] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:12884901888 len:65536 00:07:00.544 [2024-07-12 13:32:48.888295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.544 #36 NEW cov: 12155 ft: 14650 corp: 32/1182b lim: 50 exec/s: 36 rss: 73Mb L: 47/50 MS: 1 CrossOver- 00:07:00.544 [2024-07-12 13:32:48.948280] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:07:00.544 [2024-07-12 13:32:48.948306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.544 [2024-07-12 13:32:48.948346] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:00.544 [2024-07-12 13:32:48.948356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.544 [2024-07-12 13:32:48.948387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709420543 len:65536 00:07:00.544 [2024-07-12 13:32:48.948400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.544 [2024-07-12 13:32:48.948446] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 
lba:18446744073709551615 len:65536 00:07:00.544 [2024-07-12 13:32:48.948459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.544 #37 NEW cov: 12155 ft: 14672 corp: 33/1228b lim: 50 exec/s: 37 rss: 73Mb L: 46/50 MS: 1 ChangeBinInt- 00:07:00.544 [2024-07-12 13:32:48.988459] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069417664511 len:1 00:07:00.544 [2024-07-12 13:32:48.988485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.544 [2024-07-12 13:32:48.988524] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744069414649855 len:65536 00:07:00.544 [2024-07-12 13:32:48.988535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.544 [2024-07-12 13:32:48.988569] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:00.544 [2024-07-12 13:32:48.988580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.544 [2024-07-12 13:32:48.988619] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:07:00.544 [2024-07-12 13:32:48.988632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.544 [2024-07-12 13:32:48.988672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446742974197923839 len:65291 00:07:00.544 [2024-07-12 13:32:48.988686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:00.544 #38 NEW cov: 12155 ft: 14722 corp: 34/1278b lim: 50 exec/s: 38 rss: 73Mb L: 50/50 MS: 1 ChangeBinInt- 00:07:00.544 [2024-07-12 13:32:49.048445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2817858979834560256 len:42053 00:07:00.544 [2024-07-12 13:32:49.048471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.544 [2024-07-12 13:32:49.048509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073306898431 len:65536 00:07:00.544 [2024-07-12 13:32:49.048519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.544 [2024-07-12 13:32:49.048557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:00.544 [2024-07-12 13:32:49.048568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.544 #39 NEW cov: 12155 ft: 14929 corp: 35/1314b lim: 50 exec/s: 39 rss: 73Mb L: 36/50 MS: 1 CMP- DE: "\000'\033\013\320\244D\347"- 00:07:00.544 [2024-07-12 13:32:49.098590] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2817858979834298112 
len:42053 00:07:00.544 [2024-07-12 13:32:49.098616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.544 [2024-07-12 13:32:49.098657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073306898431 len:65536 00:07:00.544 [2024-07-12 13:32:49.098667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.544 [2024-07-12 13:32:49.098708] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:00.544 [2024-07-12 13:32:49.098723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.804 #40 NEW cov: 12155 ft: 14933 corp: 36/1350b lim: 50 exec/s: 20 rss: 73Mb L: 36/50 MS: 1 ChangeBit- 00:07:00.804 #40 DONE cov: 12155 ft: 14933 corp: 36/1350b lim: 50 exec/s: 20 rss: 73Mb 00:07:00.804 ###### Recommended dictionary. ###### 00:07:00.804 "\000'\033\013\320\244D\347" # Uses: 0 00:07:00.804 ###### End of recommended dictionary. ###### 00:07:00.804 Done 40 runs in 2 second(s) 00:07:00.804 13:32:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:07:00.804 13:32:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:00.804 13:32:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:00.804 13:32:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:00.804 13:32:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:00.804 13:32:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:00.804 13:32:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:00.804 13:32:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:00.804 13:32:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:00.804 13:32:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:00.804 13:32:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:00.804 13:32:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:07:00.804 13:32:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:07:00.804 13:32:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:00.804 13:32:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:00.804 13:32:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:00.804 13:32:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:00.804 13:32:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:00.804 13:32:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:07:00.804 [2024-07-12 13:32:49.275015] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:07:00.804 [2024-07-12 13:32:49.275093] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2447465 ] 00:07:00.804 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.064 [2024-07-12 13:32:49.427998] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.064 [2024-07-12 13:32:49.480517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.064 [2024-07-12 13:32:49.542065] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.064 [2024-07-12 13:32:49.558421] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:01.064 INFO: Running with entropic power schedule (0xFF, 100). 00:07:01.064 INFO: Seed: 967026628 00:07:01.064 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:01.064 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:01.064 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:01.064 INFO: A corpus is not provided, starting from an empty corpus 00:07:01.064 #2 INITED exec/s: 0 rss: 65Mb 00:07:01.064 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
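[Editor's note] The run starting above is the same launcher sketched after run 19's summary, now invoked with fuzzer_type=20: printf %02d 20 yields port 4420, so the traced sed rewrite of trsvcid in fuzz_json.conf is effectively a no-op for this run and the target listens on the default port 4420, as the nvmf_tcp_listen notice above confirms.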
00:07:01.064 This may also happen if the target rejected all inputs we tried so far 00:07:01.064 [2024-07-12 13:32:49.616652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:01.064 [2024-07-12 13:32:49.616684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.064 [2024-07-12 13:32:49.616725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:01.064 [2024-07-12 13:32:49.616735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.065 [2024-07-12 13:32:49.616777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:01.065 [2024-07-12 13:32:49.616789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.065 [2024-07-12 13:32:49.616829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:01.065 [2024-07-12 13:32:49.616841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.324 NEW_FUNC[1/697]: 0x4a6130 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:01.324 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:01.324 #21 NEW cov: 11969 ft: 11960 corp: 2/83b lim: 90 exec/s: 0 rss: 72Mb L: 82/82 MS: 4 ChangeBinInt-InsertByte-EraseBytes-InsertRepeatedBytes- 00:07:01.325 [2024-07-12 13:32:49.797703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:01.325 [2024-07-12 13:32:49.797750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.325 [2024-07-12 13:32:49.797812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:01.325 [2024-07-12 13:32:49.797832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.325 [2024-07-12 13:32:49.797892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:01.325 [2024-07-12 13:32:49.797910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.325 [2024-07-12 13:32:49.797970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:01.325 [2024-07-12 13:32:49.797987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.325 #22 NEW cov: 12099 ft: 12654 corp: 3/166b lim: 90 exec/s: 0 rss: 72Mb L: 83/83 MS: 1 CrossOver- 00:07:01.325 [2024-07-12 13:32:49.867425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:01.325 [2024-07-12 13:32:49.867451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:07:01.325 [2024-07-12 13:32:49.867500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:01.325 [2024-07-12 13:32:49.867509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.325 [2024-07-12 13:32:49.867551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:01.325 [2024-07-12 13:32:49.867564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.325 [2024-07-12 13:32:49.867609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:01.325 [2024-07-12 13:32:49.867624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.585 #23 NEW cov: 12105 ft: 12815 corp: 4/249b lim: 90 exec/s: 0 rss: 72Mb L: 83/83 MS: 1 ChangeByte- 00:07:01.585 [2024-07-12 13:32:49.927579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:01.585 [2024-07-12 13:32:49.927608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.585 [2024-07-12 13:32:49.927654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:01.585 [2024-07-12 13:32:49.927664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.585 [2024-07-12 13:32:49.927710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:01.585 [2024-07-12 13:32:49.927722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.585 [2024-07-12 13:32:49.927768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:01.585 [2024-07-12 13:32:49.927780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.585 #24 NEW cov: 12190 ft: 13117 corp: 5/332b lim: 90 exec/s: 0 rss: 72Mb L: 83/83 MS: 1 ChangeByte- 00:07:01.585 [2024-07-12 13:32:49.987874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:01.585 [2024-07-12 13:32:49.987900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.585 [2024-07-12 13:32:49.987943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:01.586 [2024-07-12 13:32:49.987955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.586 [2024-07-12 13:32:49.987990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:01.586 [2024-07-12 13:32:49.988003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.586 [2024-07-12 13:32:49.988047] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:01.586 [2024-07-12 13:32:49.988059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.586 [2024-07-12 13:32:49.988105] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:01.586 [2024-07-12 13:32:49.988117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:01.586 #35 NEW cov: 12190 ft: 13277 corp: 6/422b lim: 90 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:07:01.586 [2024-07-12 13:32:50.037997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:01.586 [2024-07-12 13:32:50.038025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.586 [2024-07-12 13:32:50.038069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:01.586 [2024-07-12 13:32:50.038080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.586 [2024-07-12 13:32:50.038117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:01.586 [2024-07-12 13:32:50.038128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.586 [2024-07-12 13:32:50.038178] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:01.586 [2024-07-12 13:32:50.038190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.586 [2024-07-12 13:32:50.038240] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:01.586 [2024-07-12 13:32:50.038252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:01.586 #36 NEW cov: 12190 ft: 13421 corp: 7/512b lim: 90 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\000"- 00:07:01.586 [2024-07-12 13:32:50.098044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:01.586 [2024-07-12 13:32:50.098073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.586 [2024-07-12 13:32:50.098121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:01.586 [2024-07-12 13:32:50.098131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.586 [2024-07-12 13:32:50.098176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:01.586 [2024-07-12 13:32:50.098189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.586 [2024-07-12 13:32:50.098240] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:01.586 [2024-07-12 
13:32:50.098252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.586 #41 NEW cov: 12190 ft: 13490 corp: 8/593b lim: 90 exec/s: 0 rss: 72Mb L: 81/90 MS: 5 ChangeByte-ChangeBit-InsertByte-CrossOver-CrossOver- 00:07:01.586 [2024-07-12 13:32:50.148304] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:01.586 [2024-07-12 13:32:50.148332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.586 [2024-07-12 13:32:50.148375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:01.586 [2024-07-12 13:32:50.148387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.586 [2024-07-12 13:32:50.148412] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:01.586 [2024-07-12 13:32:50.148425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.586 [2024-07-12 13:32:50.148471] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:01.586 [2024-07-12 13:32:50.148484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.586 [2024-07-12 13:32:50.148529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:01.586 [2024-07-12 13:32:50.148542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:01.846 #42 NEW cov: 12190 ft: 13545 corp: 9/683b lim: 90 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 ChangeBit- 00:07:01.846 [2024-07-12 13:32:50.188279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:01.846 [2024-07-12 13:32:50.188306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.846 [2024-07-12 13:32:50.188353] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:01.846 [2024-07-12 13:32:50.188363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.846 [2024-07-12 13:32:50.188411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:01.846 [2024-07-12 13:32:50.188424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.846 [2024-07-12 13:32:50.188473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:01.846 [2024-07-12 13:32:50.188484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.846 #43 NEW cov: 12190 ft: 13574 corp: 10/770b lim: 90 exec/s: 0 rss: 72Mb L: 87/90 MS: 1 InsertRepeatedBytes- 00:07:01.846 [2024-07-12 13:32:50.238513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 
cid:0 nsid:0 00:07:01.847 [2024-07-12 13:32:50.238539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.847 [2024-07-12 13:32:50.238581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:01.847 [2024-07-12 13:32:50.238594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.847 [2024-07-12 13:32:50.238628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:01.847 [2024-07-12 13:32:50.238641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.847 [2024-07-12 13:32:50.238688] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:01.847 [2024-07-12 13:32:50.238701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.847 [2024-07-12 13:32:50.238749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:01.847 [2024-07-12 13:32:50.238762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:01.847 #44 NEW cov: 12190 ft: 13601 corp: 11/860b lim: 90 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 CopyPart- 00:07:01.847 [2024-07-12 13:32:50.278656] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:01.847 [2024-07-12 13:32:50.278682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.847 [2024-07-12 13:32:50.278724] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:01.847 [2024-07-12 13:32:50.278737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.847 [2024-07-12 13:32:50.278772] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:01.847 [2024-07-12 13:32:50.278784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.847 [2024-07-12 13:32:50.278830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:01.847 [2024-07-12 13:32:50.278842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.847 [2024-07-12 13:32:50.278889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:01.847 [2024-07-12 13:32:50.278901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:01.847 #45 NEW cov: 12190 ft: 13636 corp: 12/950b lim: 90 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 CMP- DE: "\302$\304\200\014\033'\000"- 00:07:01.847 [2024-07-12 13:32:50.338668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:01.847 [2024-07-12 13:32:50.338697] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.847 [2024-07-12 13:32:50.338745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:01.847 [2024-07-12 13:32:50.338755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.847 [2024-07-12 13:32:50.338800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:01.847 [2024-07-12 13:32:50.338813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.847 [2024-07-12 13:32:50.338858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:01.847 [2024-07-12 13:32:50.338870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.847 #46 NEW cov: 12190 ft: 13644 corp: 13/1037b lim: 90 exec/s: 0 rss: 73Mb L: 87/90 MS: 1 CopyPart- 00:07:01.847 [2024-07-12 13:32:50.398826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:01.847 [2024-07-12 13:32:50.398852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.847 [2024-07-12 13:32:50.398897] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:01.847 [2024-07-12 13:32:50.398907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.847 [2024-07-12 13:32:50.398954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:01.847 [2024-07-12 13:32:50.398967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.847 [2024-07-12 13:32:50.399016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:01.847 [2024-07-12 13:32:50.399028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.847 #47 NEW cov: 12190 ft: 13657 corp: 14/1125b lim: 90 exec/s: 0 rss: 73Mb L: 88/90 MS: 1 CrossOver- 00:07:02.108 [2024-07-12 13:32:50.449123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:02.108 [2024-07-12 13:32:50.449150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.108 [2024-07-12 13:32:50.449193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:02.108 [2024-07-12 13:32:50.449206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.108 [2024-07-12 13:32:50.449242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:02.108 [2024-07-12 13:32:50.449254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
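The "#N NEW cov:" records interleaved with the qpair notices above are standard libFuzzer status lines: N is the number of inputs executed so far, cov counts covered control-flow edges, ft counts coverage features, corp reports corpus entries and total bytes, lim is the current input-length cap, exec/s is throughput, rss is resident memory, L gives the new input's length against the largest in the corpus, and MS names the mutation sequence that produced the input. A minimal sketch for pulling the coverage trend back out of a captured log, assuming the console output was saved to fuzz.log (a file name used here only for illustration):

  # Extract run number, edge coverage, and feature count from each
  # libFuzzer "NEW" status line, then show the most recent gains.
  grep -oE '#[0-9]+ NEW cov: [0-9]+ ft: [0-9]+' fuzz.log |
  awk '{sub("#","",$1); print $1, $4, $6}' |   # -> "21 11969 11960" etc.
  sort -n | tail -5

Each increase in cov or ft is what earns an input a place in the corpus, which is why every "#N NEW" line above is paired with a fresh group of qpair notices from replaying that input.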
00:07:02.108 [2024-07-12 13:32:50.449299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:02.108 [2024-07-12 13:32:50.449311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.108 [2024-07-12 13:32:50.449359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:02.108 [2024-07-12 13:32:50.449372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:02.108 #48 NEW cov: 12190 ft: 13675 corp: 15/1215b lim: 90 exec/s: 0 rss: 73Mb L: 90/90 MS: 1 ChangeByte- 00:07:02.108 [2024-07-12 13:32:50.499087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:02.108 [2024-07-12 13:32:50.499117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.108 [2024-07-12 13:32:50.499166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:02.108 [2024-07-12 13:32:50.499175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.108 [2024-07-12 13:32:50.499219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:02.108 [2024-07-12 13:32:50.499235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.108 [2024-07-12 13:32:50.499282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:02.108 [2024-07-12 13:32:50.499294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.108 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:02.108 #49 NEW cov: 12213 ft: 13790 corp: 16/1302b lim: 90 exec/s: 0 rss: 73Mb L: 87/90 MS: 1 ChangeBinInt- 00:07:02.108 [2024-07-12 13:32:50.559258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:02.108 [2024-07-12 13:32:50.559287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.109 [2024-07-12 13:32:50.559335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:02.109 [2024-07-12 13:32:50.559345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.109 [2024-07-12 13:32:50.559388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:02.109 [2024-07-12 13:32:50.559399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.109 [2024-07-12 13:32:50.559445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:02.109 [2024-07-12 13:32:50.559457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 
m:0 dnr:1 00:07:02.109 #50 NEW cov: 12213 ft: 13861 corp: 17/1384b lim: 90 exec/s: 50 rss: 73Mb L: 82/90 MS: 1 InsertByte- 00:07:02.109 [2024-07-12 13:32:50.619542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:02.109 [2024-07-12 13:32:50.619570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.109 [2024-07-12 13:32:50.619615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:02.109 [2024-07-12 13:32:50.619626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.109 [2024-07-12 13:32:50.619662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:02.109 [2024-07-12 13:32:50.619675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.109 [2024-07-12 13:32:50.619723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:02.109 [2024-07-12 13:32:50.619736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.109 [2024-07-12 13:32:50.619783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:02.109 [2024-07-12 13:32:50.619794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:02.109 #51 NEW cov: 12213 ft: 13872 corp: 18/1474b lim: 90 exec/s: 51 rss: 73Mb L: 90/90 MS: 1 ShuffleBytes- 00:07:02.109 [2024-07-12 13:32:50.659515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:02.109 [2024-07-12 13:32:50.659542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.109 [2024-07-12 13:32:50.659588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:02.109 [2024-07-12 13:32:50.659598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.109 [2024-07-12 13:32:50.659639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:02.109 [2024-07-12 13:32:50.659651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.109 [2024-07-12 13:32:50.659699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:02.109 [2024-07-12 13:32:50.659712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.109 #52 NEW cov: 12213 ft: 13909 corp: 19/1561b lim: 90 exec/s: 52 rss: 73Mb L: 87/90 MS: 1 CopyPart- 00:07:02.369 [2024-07-12 13:32:50.709766] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:02.370 [2024-07-12 13:32:50.709794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:02.370 [2024-07-12 13:32:50.709837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:02.370 [2024-07-12 13:32:50.709849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.370 [2024-07-12 13:32:50.709882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:02.370 [2024-07-12 13:32:50.709895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.370 [2024-07-12 13:32:50.709941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:02.370 [2024-07-12 13:32:50.709952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.370 [2024-07-12 13:32:50.709999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:02.370 [2024-07-12 13:32:50.710011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:02.370 #53 NEW cov: 12213 ft: 13934 corp: 20/1651b lim: 90 exec/s: 53 rss: 73Mb L: 90/90 MS: 1 PersAutoDict- DE: "\302$\304\200\014\033'\000"- 00:07:02.370 [2024-07-12 13:32:50.759474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:02.370 [2024-07-12 13:32:50.759500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.370 [2024-07-12 13:32:50.759542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:02.370 [2024-07-12 13:32:50.759553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.370 #54 NEW cov: 12213 ft: 14374 corp: 21/1698b lim: 90 exec/s: 54 rss: 73Mb L: 47/90 MS: 1 EraseBytes- 00:07:02.370 [2024-07-12 13:32:50.820069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:02.370 [2024-07-12 13:32:50.820096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.370 [2024-07-12 13:32:50.820143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:02.370 [2024-07-12 13:32:50.820159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.370 [2024-07-12 13:32:50.820204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:02.370 [2024-07-12 13:32:50.820216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.370 [2024-07-12 13:32:50.820263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:02.370 [2024-07-12 13:32:50.820276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.370 [2024-07-12 13:32:50.820325] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:02.370 [2024-07-12 13:32:50.820337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:02.370 #55 NEW cov: 12213 ft: 14393 corp: 22/1788b lim: 90 exec/s: 55 rss: 73Mb L: 90/90 MS: 1 ChangeByte- 00:07:02.370 [2024-07-12 13:32:50.880239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:02.370 [2024-07-12 13:32:50.880266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.370 [2024-07-12 13:32:50.880310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:02.370 [2024-07-12 13:32:50.880322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.370 [2024-07-12 13:32:50.880352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:02.370 [2024-07-12 13:32:50.880364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.370 [2024-07-12 13:32:50.880408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:02.370 [2024-07-12 13:32:50.880421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.370 [2024-07-12 13:32:50.880468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:02.370 [2024-07-12 13:32:50.880480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:02.370 #56 NEW cov: 12213 ft: 14415 corp: 23/1878b lim: 90 exec/s: 56 rss: 73Mb L: 90/90 MS: 1 ChangeByte- 00:07:02.370 [2024-07-12 13:32:50.940399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:02.370 [2024-07-12 13:32:50.940427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.370 [2024-07-12 13:32:50.940473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:02.370 [2024-07-12 13:32:50.940483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.370 [2024-07-12 13:32:50.940524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:02.370 [2024-07-12 13:32:50.940537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.370 [2024-07-12 13:32:50.940583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:02.370 [2024-07-12 13:32:50.940595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.370 [2024-07-12 13:32:50.940640] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:02.370 [2024-07-12 
13:32:50.940652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:02.631 #57 NEW cov: 12213 ft: 14423 corp: 24/1968b lim: 90 exec/s: 57 rss: 73Mb L: 90/90 MS: 1 ChangeBinInt- 00:07:02.631 [2024-07-12 13:32:50.980481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:02.631 [2024-07-12 13:32:50.980508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.631 [2024-07-12 13:32:50.980550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:02.631 [2024-07-12 13:32:50.980562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.631 [2024-07-12 13:32:50.980601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:02.631 [2024-07-12 13:32:50.980612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.631 [2024-07-12 13:32:50.980656] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:02.631 [2024-07-12 13:32:50.980668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.631 [2024-07-12 13:32:50.980715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:02.631 [2024-07-12 13:32:50.980728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:02.631 #58 NEW cov: 12213 ft: 14435 corp: 25/2058b lim: 90 exec/s: 58 rss: 73Mb L: 90/90 MS: 1 ChangeBinInt- 00:07:02.631 [2024-07-12 13:32:51.040525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:02.631 [2024-07-12 13:32:51.040552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.631 [2024-07-12 13:32:51.040596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:02.631 [2024-07-12 13:32:51.040606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.631 [2024-07-12 13:32:51.040647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:02.631 [2024-07-12 13:32:51.040660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.631 [2024-07-12 13:32:51.040709] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:02.631 [2024-07-12 13:32:51.040721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.631 #59 NEW cov: 12213 ft: 14437 corp: 26/2146b lim: 90 exec/s: 59 rss: 73Mb L: 88/90 MS: 1 CrossOver- 00:07:02.631 [2024-07-12 13:32:51.090663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:02.631 [2024-07-12 
13:32:51.090690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.631 [2024-07-12 13:32:51.090737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:02.631 [2024-07-12 13:32:51.090746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.631 [2024-07-12 13:32:51.090792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:02.631 [2024-07-12 13:32:51.090804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.631 [2024-07-12 13:32:51.090851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:02.631 [2024-07-12 13:32:51.090867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.631 #60 NEW cov: 12213 ft: 14448 corp: 27/2221b lim: 90 exec/s: 60 rss: 73Mb L: 75/90 MS: 1 EraseBytes- 00:07:02.631 [2024-07-12 13:32:51.150794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:02.631 [2024-07-12 13:32:51.150822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.631 [2024-07-12 13:32:51.150865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:02.631 [2024-07-12 13:32:51.150876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.631 [2024-07-12 13:32:51.150919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:02.631 [2024-07-12 13:32:51.150932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.631 [2024-07-12 13:32:51.150981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:02.631 [2024-07-12 13:32:51.150993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.631 #61 NEW cov: 12213 ft: 14452 corp: 28/2306b lim: 90 exec/s: 61 rss: 73Mb L: 85/90 MS: 1 EraseBytes- 00:07:02.631 [2024-07-12 13:32:51.191050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:02.631 [2024-07-12 13:32:51.191077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.631 [2024-07-12 13:32:51.191120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:02.631 [2024-07-12 13:32:51.191133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.631 [2024-07-12 13:32:51.191166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:02.631 [2024-07-12 13:32:51.191178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.631 [2024-07-12 13:32:51.191223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:02.631 [2024-07-12 13:32:51.191239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.631 [2024-07-12 13:32:51.191288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:02.631 [2024-07-12 13:32:51.191300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:02.891 #62 NEW cov: 12213 ft: 14454 corp: 29/2396b lim: 90 exec/s: 62 rss: 73Mb L: 90/90 MS: 1 ChangeBit- 00:07:02.891 [2024-07-12 13:32:51.241032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:02.891 [2024-07-12 13:32:51.241058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.891 [2024-07-12 13:32:51.241103] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:02.891 [2024-07-12 13:32:51.241113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.891 [2024-07-12 13:32:51.241153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:02.891 [2024-07-12 13:32:51.241165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.891 [2024-07-12 13:32:51.241214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:02.891 [2024-07-12 13:32:51.241234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.891 [2024-07-12 13:32:51.301211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:02.891 [2024-07-12 13:32:51.301243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.891 [2024-07-12 13:32:51.301292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:02.891 [2024-07-12 13:32:51.301301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.891 [2024-07-12 13:32:51.301347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:02.891 [2024-07-12 13:32:51.301360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.891 [2024-07-12 13:32:51.301407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:02.891 [2024-07-12 13:32:51.301419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.891 #64 NEW cov: 12213 ft: 14507 corp: 30/2479b lim: 90 exec/s: 64 rss: 74Mb L: 83/90 MS: 2 CopyPart-CMP- DE: "\377\377\377\377\377\377\377\377"- 00:07:02.891 [2024-07-12 
13:32:51.351210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:02.891 [2024-07-12 13:32:51.351240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.891 [2024-07-12 13:32:51.351287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:02.891 [2024-07-12 13:32:51.351297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.891 [2024-07-12 13:32:51.351343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:02.891 [2024-07-12 13:32:51.351355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.891 #65 NEW cov: 12213 ft: 14771 corp: 31/2548b lim: 90 exec/s: 65 rss: 74Mb L: 69/90 MS: 1 EraseBytes- 00:07:02.891 [2024-07-12 13:32:51.411538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:02.891 [2024-07-12 13:32:51.411564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.891 [2024-07-12 13:32:51.411610] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:02.891 [2024-07-12 13:32:51.411620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.891 [2024-07-12 13:32:51.411665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:02.891 [2024-07-12 13:32:51.411678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.891 [2024-07-12 13:32:51.411727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:02.892 [2024-07-12 13:32:51.411739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.892 #66 NEW cov: 12213 ft: 14789 corp: 32/2636b lim: 90 exec/s: 66 rss: 74Mb L: 88/90 MS: 1 ChangeBit- 00:07:02.892 [2024-07-12 13:32:51.471710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:02.892 [2024-07-12 13:32:51.471737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.892 [2024-07-12 13:32:51.471787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:02.892 [2024-07-12 13:32:51.471797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.892 [2024-07-12 13:32:51.471843] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:02.892 [2024-07-12 13:32:51.471856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.892 [2024-07-12 13:32:51.471903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 
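Within the MS field, CMP- followed by a DE: "..." suffix indicates the mutator spliced in a comparison operand captured at runtime as a dictionary entry, and PersAutoDict marks the reuse of an entry captured earlier in the run; the three entries collected by this run are summarized in the recommended-dictionary block printed just below. A hypothetical follow-up, not part of this job, would be persisting them in libFuzzer's dictionary file format, which takes one quoted value per line and uses \xNN hex escapes in place of the octal escapes shown in the log:

  # Illustrative only: write the recommended entries from this run to a
  # dictionary file (octal \377 -> \xff, \302 -> \xc2, and so on).
  cat > nvmf_20.dict <<'EOF'
  kw1="\xff\xff\xff\xff\xff\xff\xff\x00"
  kw2="\xc2$\xc4\x80\x0c\x1b'\x00"
  kw3="\xff\xff\xff\xff\xff\xff\xff\xff"
  EOF

Passing -dict=nvmf_20.dict to a later invocation would seed the mutator with these values up front rather than waiting for the comparison interceptors to rediscover them; whether the wrapper scripts forward extra flags to the harness is an assumption here.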
00:07:02.892 [2024-07-12 13:32:51.471915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.152 #67 NEW cov: 12213 ft: 14810 corp: 33/2721b lim: 90 exec/s: 67 rss: 74Mb L: 85/90 MS: 1 ChangeBit- 00:07:03.152 [2024-07-12 13:32:51.531833] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.152 [2024-07-12 13:32:51.531860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.152 [2024-07-12 13:32:51.531905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.152 [2024-07-12 13:32:51.531915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.152 [2024-07-12 13:32:51.531960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:03.152 [2024-07-12 13:32:51.531973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.152 [2024-07-12 13:32:51.532020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:03.152 [2024-07-12 13:32:51.532032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.152 #68 NEW cov: 12213 ft: 14824 corp: 34/2808b lim: 90 exec/s: 68 rss: 74Mb L: 87/90 MS: 1 ChangeBit- 00:07:03.152 [2024-07-12 13:32:51.592025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.152 [2024-07-12 13:32:51.592052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.152 [2024-07-12 13:32:51.592096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.152 [2024-07-12 13:32:51.592105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.152 [2024-07-12 13:32:51.592148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:03.152 [2024-07-12 13:32:51.592160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.152 [2024-07-12 13:32:51.592208] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:03.152 [2024-07-12 13:32:51.592220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.152 #69 NEW cov: 12213 ft: 14842 corp: 35/2894b lim: 90 exec/s: 34 rss: 74Mb L: 86/90 MS: 1 InsertByte- 00:07:03.152 #69 DONE cov: 12213 ft: 14842 corp: 35/2894b lim: 90 exec/s: 34 rss: 74Mb 00:07:03.152 ###### Recommended dictionary. ###### 00:07:03.152 "\377\377\377\377\377\377\377\000" # Uses: 0 00:07:03.152 "\302$\304\200\014\033'\000" # Uses: 1 00:07:03.152 "\377\377\377\377\377\377\377\377" # Uses: 0 00:07:03.152 ###### End of recommended dictionary. 
###### 00:07:03.152 Done 69 runs in 2 second(s) 00:07:03.152 13:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:07:03.152 13:32:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:03.152 13:32:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:03.152 13:32:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:07:03.152 13:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:07:03.152 13:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:03.152 13:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:03.152 13:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:03.152 13:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:07:03.152 13:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:03.152 13:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:03.152 13:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:07:03.152 13:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:07:03.152 13:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:03.152 13:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:07:03.152 13:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:03.152 13:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:03.152 13:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:03.153 13:32:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:07:03.413 [2024-07-12 13:32:51.753420] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
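The traced shell a few lines above shows how run.sh parameterizes each fuzzer instance: the TCP service port is derived from the fuzzer index (44 followed by the zero-padded index, giving 4421 for fuzzer 21), the shared fuzz_json.conf is rewritten with that port via sed, leak reports from spdk_nvmf_qpair_disconnect and nvmf_ctrlr_create are suppressed for LeakSanitizer through the suppress_nvmf_fuzz file named in LSAN_OPTIONS, and the harness is launched against the resulting transport ID. A condensed sketch of those steps, where the rootdir variable and the exact redirections are assumptions filled in around the traced commands:

  i=21
  port="44$(printf %02d "$i")"            # -> 4421, matching the trsvcid above
  nvmf_cfg="/tmp/fuzz_json_${i}.conf"
  # Rewrite the shared config for this instance's port.
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
  # Suppress the two known leak reports before the ASAN/LSAN-built target runs.
  echo "leak:spdk_nvmf_qpair_disconnect" >  /var/tmp/suppress_nvmf_fuzz
  echo "leak:nvmf_ctrlr_create"          >> /var/tmp/suppress_nvmf_fuzz
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

The harness binary then receives the rewritten config via -c and the transport ID via -F, exactly as in the llvm_nvme_fuzz command line logged above.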
00:07:03.413 [2024-07-12 13:32:51.753523] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2448000 ] 00:07:03.413 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.413 [2024-07-12 13:32:51.907966] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.413 [2024-07-12 13:32:51.963472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.672 [2024-07-12 13:32:52.025066] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.672 [2024-07-12 13:32:52.041413] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:07:03.672 INFO: Running with entropic power schedule (0xFF, 100). 00:07:03.672 INFO: Seed: 3451034237 00:07:03.672 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:03.672 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:03.672 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:03.672 INFO: A corpus is not provided, starting from an empty corpus 00:07:03.672 #2 INITED exec/s: 0 rss: 64Mb 00:07:03.672 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:03.672 This may also happen if the target rejected all inputs we tried so far 00:07:03.672 [2024-07-12 13:32:52.109132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:03.672 [2024-07-12 13:32:52.109177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.672 [2024-07-12 13:32:52.109274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:03.672 [2024-07-12 13:32:52.109296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.672 [2024-07-12 13:32:52.109423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:03.672 [2024-07-12 13:32:52.109443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.672 [2024-07-12 13:32:52.109569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:03.672 [2024-07-12 13:32:52.109588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.932 NEW_FUNC[1/696]: 0x4a9350 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:03.932 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:03.933 #21 NEW cov: 11940 ft: 11940 corp: 2/44b lim: 50 exec/s: 0 rss: 70Mb L: 43/43 MS: 4 InsertByte-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:07:03.933 [2024-07-12 13:32:52.300036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:03.933 [2024-07-12 13:32:52.300091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.933 [2024-07-12 13:32:52.300180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:03.933 [2024-07-12 13:32:52.300202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.933 [2024-07-12 13:32:52.300334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:03.933 [2024-07-12 13:32:52.300361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.933 [2024-07-12 13:32:52.300489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:03.933 [2024-07-12 13:32:52.300512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:03.933 NEW_FUNC[1/1]: 0x17ad780 in nvme_qpair_check_enabled /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:637 00:07:03.933 #22 NEW cov: 12074 ft: 12541 corp: 3/84b lim: 50 exec/s: 0 rss: 70Mb L: 40/43 MS: 1 EraseBytes- 00:07:03.933 [2024-07-12 13:32:52.379315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:03.933 [2024-07-12 13:32:52.379355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.933 [2024-07-12 13:32:52.379463] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:03.933 [2024-07-12 13:32:52.379480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.933 #25 NEW cov: 12080 ft: 13114 corp: 4/111b lim: 50 exec/s: 0 rss: 70Mb L: 27/43 MS: 3 CrossOver-CrossOver-InsertRepeatedBytes- 00:07:03.933 [2024-07-12 13:32:52.439500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:03.933 [2024-07-12 13:32:52.439533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.933 [2024-07-12 13:32:52.439653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:03.933 [2024-07-12 13:32:52.439665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.933 #26 NEW cov: 12165 ft: 13383 corp: 5/139b lim: 50 exec/s: 0 rss: 70Mb L: 28/43 MS: 1 InsertRepeatedBytes- 00:07:03.933 [2024-07-12 13:32:52.500450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:03.933 [2024-07-12 13:32:52.500484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.933 [2024-07-12 13:32:52.500588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:03.933 [2024-07-12 13:32:52.500608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.933 [2024-07-12 13:32:52.500644] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:03.933 [2024-07-12 13:32:52.500661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.933 [2024-07-12 13:32:52.500785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:03.933 [2024-07-12 13:32:52.500803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.193 #27 NEW cov: 12165 ft: 13539 corp: 6/179b lim: 50 exec/s: 0 rss: 70Mb L: 40/43 MS: 1 CrossOver- 00:07:04.193 [2024-07-12 13:32:52.570123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:04.193 [2024-07-12 13:32:52.570157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.193 [2024-07-12 13:32:52.570284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:04.193 [2024-07-12 13:32:52.570296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.193 #28 NEW cov: 12165 ft: 13697 corp: 7/206b lim: 50 exec/s: 0 rss: 70Mb L: 27/43 MS: 1 ChangeBit- 00:07:04.193 [2024-07-12 13:32:52.640291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:04.193 [2024-07-12 13:32:52.640321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.193 [2024-07-12 13:32:52.640427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:04.193 [2024-07-12 13:32:52.640441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.193 #29 NEW cov: 12165 ft: 13791 corp: 8/233b lim: 50 exec/s: 0 rss: 70Mb L: 27/43 MS: 1 ChangeByte- 00:07:04.193 [2024-07-12 13:32:52.720567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:04.193 [2024-07-12 13:32:52.720600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.193 [2024-07-12 13:32:52.720710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:04.193 [2024-07-12 13:32:52.720727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.193 #30 NEW cov: 12165 ft: 13854 corp: 9/262b lim: 50 exec/s: 0 rss: 72Mb L: 29/43 MS: 1 InsertByte- 00:07:04.455 [2024-07-12 13:32:52.790845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:04.455 [2024-07-12 13:32:52.790876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.455 [2024-07-12 13:32:52.790981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:04.455 [2024-07-12 13:32:52.790999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.455 #31 NEW cov: 12165 ft: 13914 corp: 10/290b lim: 50 exec/s: 0 rss: 72Mb L: 28/43 MS: 1 ChangeBinInt- 00:07:04.455 [2024-07-12 13:32:52.851754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:04.455 [2024-07-12 13:32:52.851784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.455 [2024-07-12 13:32:52.851879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:04.455 [2024-07-12 13:32:52.851898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.455 [2024-07-12 13:32:52.851953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:04.455 [2024-07-12 13:32:52.851972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.455 [2024-07-12 13:32:52.852091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:04.455 [2024-07-12 13:32:52.852112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.455 #35 NEW cov: 12165 ft: 14015 corp: 11/330b lim: 50 exec/s: 0 rss: 72Mb L: 40/43 MS: 4 ShuffleBytes-CMP-EraseBytes-InsertRepeatedBytes- DE: "\005\000"- 00:07:04.455 [2024-07-12 13:32:52.912045] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:04.455 [2024-07-12 13:32:52.912076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.455 [2024-07-12 13:32:52.912175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:04.455 [2024-07-12 13:32:52.912195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.455 [2024-07-12 13:32:52.912259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:04.455 [2024-07-12 13:32:52.912279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.455 [2024-07-12 13:32:52.912398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:04.455 [2024-07-12 13:32:52.912421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.455 #36 NEW cov: 12165 ft: 14090 corp: 12/370b lim: 50 exec/s: 0 rss: 72Mb L: 40/43 MS: 1 ChangeBit- 00:07:04.455 [2024-07-12 13:32:52.971562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:04.455 [2024-07-12 13:32:52.971594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.455 [2024-07-12 13:32:52.971695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:04.455 [2024-07-12 13:32:52.971711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.455 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:04.455 #37 NEW cov: 12188 ft: 14160 corp: 13/397b lim: 50 exec/s: 0 rss: 72Mb L: 27/43 MS: 1 ChangeByte- 00:07:04.716 [2024-07-12 13:32:53.052516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:04.716 [2024-07-12 13:32:53.052550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.716 [2024-07-12 13:32:53.052656] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:04.716 [2024-07-12 13:32:53.052676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.716 [2024-07-12 13:32:53.052713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:04.716 [2024-07-12 13:32:53.052726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.716 [2024-07-12 13:32:53.052846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:04.716 [2024-07-12 13:32:53.052864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.716 #38 NEW cov: 12188 ft: 14195 corp: 14/438b lim: 50 exec/s: 38 rss: 72Mb L: 41/43 MS: 1 InsertRepeatedBytes- 00:07:04.716 [2024-07-12 13:32:53.132789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:04.716 [2024-07-12 13:32:53.132822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.716 [2024-07-12 13:32:53.132917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:04.716 [2024-07-12 13:32:53.132936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.716 [2024-07-12 13:32:53.133000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:04.716 [2024-07-12 13:32:53.133016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.716 [2024-07-12 13:32:53.133135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:04.716 [2024-07-12 13:32:53.133155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.716 #39 NEW cov: 12188 ft: 14205 corp: 15/481b lim: 50 exec/s: 39 rss: 72Mb L: 43/43 MS: 1 ChangeByte- 00:07:04.716 [2024-07-12 13:32:53.193123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:04.716 [2024-07-12 13:32:53.193159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.717 [2024-07-12 13:32:53.193258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
RELEASE (15) sqid:1 cid:1 nsid:0 00:07:04.717 [2024-07-12 13:32:53.193279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.717 [2024-07-12 13:32:53.193346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:04.717 [2024-07-12 13:32:53.193361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.717 [2024-07-12 13:32:53.193474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:04.717 [2024-07-12 13:32:53.193489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.717 #40 NEW cov: 12188 ft: 14258 corp: 16/529b lim: 50 exec/s: 40 rss: 72Mb L: 48/48 MS: 1 InsertRepeatedBytes- 00:07:04.717 [2024-07-12 13:32:53.252563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:04.717 [2024-07-12 13:32:53.252593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.717 [2024-07-12 13:32:53.252692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:04.717 [2024-07-12 13:32:53.252710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.717 #41 NEW cov: 12188 ft: 14302 corp: 17/556b lim: 50 exec/s: 41 rss: 72Mb L: 27/48 MS: 1 ChangeByte- 00:07:04.978 [2024-07-12 13:32:53.313596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:04.978 [2024-07-12 13:32:53.313626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.978 [2024-07-12 13:32:53.313722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:04.978 [2024-07-12 13:32:53.313742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.978 [2024-07-12 13:32:53.313797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:04.978 [2024-07-12 13:32:53.313818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.978 [2024-07-12 13:32:53.313940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:04.978 [2024-07-12 13:32:53.313960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.978 #42 NEW cov: 12188 ft: 14322 corp: 18/605b lim: 50 exec/s: 42 rss: 72Mb L: 49/49 MS: 1 CrossOver- 00:07:04.978 [2024-07-12 13:32:53.383106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:04.978 [2024-07-12 13:32:53.383136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.978 [2024-07-12 13:32:53.383263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
RELEASE (15) sqid:1 cid:1 nsid:0 00:07:04.978 [2024-07-12 13:32:53.383275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.978 #43 NEW cov: 12188 ft: 14384 corp: 19/634b lim: 50 exec/s: 43 rss: 72Mb L: 29/49 MS: 1 PersAutoDict- DE: "\005\000"- 00:07:04.978 [2024-07-12 13:32:53.453435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:04.978 [2024-07-12 13:32:53.453467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.978 [2024-07-12 13:32:53.453565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:04.978 [2024-07-12 13:32:53.453584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.978 #44 NEW cov: 12188 ft: 14390 corp: 20/663b lim: 50 exec/s: 44 rss: 72Mb L: 29/49 MS: 1 PersAutoDict- DE: "\005\000"- 00:07:04.978 [2024-07-12 13:32:53.514367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:04.978 [2024-07-12 13:32:53.514398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.978 [2024-07-12 13:32:53.514498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:04.978 [2024-07-12 13:32:53.514519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.978 [2024-07-12 13:32:53.514579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:04.978 [2024-07-12 13:32:53.514600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.978 [2024-07-12 13:32:53.514711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:04.978 [2024-07-12 13:32:53.514726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:04.978 #45 NEW cov: 12188 ft: 14457 corp: 21/703b lim: 50 exec/s: 45 rss: 72Mb L: 40/49 MS: 1 ChangeBit- 00:07:05.239 [2024-07-12 13:32:53.574356] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:05.239 [2024-07-12 13:32:53.574389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.239 [2024-07-12 13:32:53.574473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:05.239 [2024-07-12 13:32:53.574491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.239 [2024-07-12 13:32:53.574575] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:05.239 [2024-07-12 13:32:53.574593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.239 #46 NEW cov: 12188 ft: 14710 corp: 22/740b lim: 50 exec/s: 46 rss: 72Mb 
L: 37/49 MS: 1 EraseBytes- 00:07:05.239 [2024-07-12 13:32:53.654958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:05.239 [2024-07-12 13:32:53.654992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.239 [2024-07-12 13:32:53.655091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:05.239 [2024-07-12 13:32:53.655111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.239 [2024-07-12 13:32:53.655179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:05.239 [2024-07-12 13:32:53.655195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.239 [2024-07-12 13:32:53.655312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:05.239 [2024-07-12 13:32:53.655330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.239 #47 NEW cov: 12188 ft: 14737 corp: 23/789b lim: 50 exec/s: 47 rss: 72Mb L: 49/49 MS: 1 CopyPart- 00:07:05.239 [2024-07-12 13:32:53.714862] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:05.239 [2024-07-12 13:32:53.714892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.239 [2024-07-12 13:32:53.715000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:05.239 [2024-07-12 13:32:53.715017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.239 [2024-07-12 13:32:53.715074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:05.239 [2024-07-12 13:32:53.715091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.239 #48 NEW cov: 12188 ft: 14802 corp: 24/820b lim: 50 exec/s: 48 rss: 72Mb L: 31/49 MS: 1 CMP- DE: "\001\000\001%"- 00:07:05.239 [2024-07-12 13:32:53.785481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:05.239 [2024-07-12 13:32:53.785512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.239 [2024-07-12 13:32:53.785613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:05.239 [2024-07-12 13:32:53.785632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.239 [2024-07-12 13:32:53.785694] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:05.239 [2024-07-12 13:32:53.785714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.239 [2024-07-12 13:32:53.785836] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:05.239 [2024-07-12 13:32:53.785854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.500 #49 NEW cov: 12188 ft: 14839 corp: 25/869b lim: 50 exec/s: 49 rss: 72Mb L: 49/49 MS: 1 ChangeBit- 00:07:05.500 [2024-07-12 13:32:53.855799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:05.500 [2024-07-12 13:32:53.855838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.500 [2024-07-12 13:32:53.855939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:05.500 [2024-07-12 13:32:53.855956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.500 [2024-07-12 13:32:53.855982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:05.500 [2024-07-12 13:32:53.856001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.500 [2024-07-12 13:32:53.856117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:05.500 [2024-07-12 13:32:53.856133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.500 #55 NEW cov: 12188 ft: 14840 corp: 26/917b lim: 50 exec/s: 55 rss: 72Mb L: 48/49 MS: 1 InsertRepeatedBytes- 00:07:05.500 [2024-07-12 13:32:53.915158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:05.500 [2024-07-12 13:32:53.915190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.501 [2024-07-12 13:32:53.915303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:05.501 [2024-07-12 13:32:53.915317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.501 #56 NEW cov: 12188 ft: 14866 corp: 27/944b lim: 50 exec/s: 56 rss: 72Mb L: 27/49 MS: 1 ChangeByte- 00:07:05.501 [2024-07-12 13:32:53.975431] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:05.501 [2024-07-12 13:32:53.975461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.501 [2024-07-12 13:32:53.975557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:05.501 [2024-07-12 13:32:53.975577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.501 #57 NEW cov: 12188 ft: 14873 corp: 28/973b lim: 50 exec/s: 57 rss: 72Mb L: 29/49 MS: 1 CrossOver- 00:07:05.501 [2024-07-12 13:32:54.045664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:05.501 [2024-07-12 13:32:54.045697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:05.501 [2024-07-12 13:32:54.045794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0
00:07:05.501 [2024-07-12 13:32:54.045815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:05.501 #58 NEW cov: 12188 ft: 14877 corp: 29/1000b lim: 50 exec/s: 58 rss: 72Mb L: 27/49 MS: 1 ShuffleBytes-
00:07:05.762 [2024-07-12 13:32:54.105849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0
00:07:05.762 [2024-07-12 13:32:54.105879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:05.762 [2024-07-12 13:32:54.105974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0
00:07:05.762 [2024-07-12 13:32:54.105990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:05.762 #59 NEW cov: 12188 ft: 14887 corp: 30/1027b lim: 50 exec/s: 29 rss: 72Mb L: 27/49 MS: 1 CrossOver-
00:07:05.762 #59 DONE cov: 12188 ft: 14887 corp: 30/1027b lim: 50 exec/s: 29 rss: 72Mb
00:07:05.762 ###### Recommended dictionary. ######
00:07:05.762 "\005\000" # Uses: 2
00:07:05.762 "\001\000\001%" # Uses: 0
00:07:05.762 ###### End of recommended dictionary. ######
00:07:05.762 Done 59 runs in 2 second(s)
00:07:05.762 13:32:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz
00:07:05.762 13:32:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:05.762 13:32:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:05.762 13:32:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1
00:07:05.762 13:32:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22
00:07:05.762 13:32:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:05.762 13:32:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:05.762 13:32:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22
00:07:05.762 13:32:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf
00:07:05.762 13:32:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:05.762 13:32:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:05.762 13:32:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22
00:07:05.762 13:32:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422
00:07:05.762 13:32:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22
00:07:05.762 13:32:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422'
00:07:05.762 13:32:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:05.762 13:32:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:05.762 13:32:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:05.762 13:32:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22
00:07:06.022 [2024-07-12 13:32:54.269145] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization...
00:07:06.022 [2024-07-12 13:32:54.269215] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2448444 ]
00:07:06.022 EAL: No free 2048 kB hugepages reported on node 1
00:07:06.022 [2024-07-12 13:32:54.439771] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:06.022 [2024-07-12 13:32:54.498365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:06.023 [2024-07-12 13:32:54.560430] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:06.023 [2024-07-12 13:32:54.576729] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 ***
00:07:06.023 INFO: Running with entropic power schedule (0xFF, 100).
00:07:06.023 INFO: Seed: 1693056807
00:07:06.284 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1),
00:07:06.284 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218),
00:07:06.284 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22
00:07:06.284 INFO: A corpus is not provided, starting from an empty corpus
00:07:06.284 #2 INITED exec/s: 0 rss: 64Mb
00:07:06.284 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:06.284 This may also happen if the target rejected all inputs we tried so far 00:07:06.284 [2024-07-12 13:32:54.644341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:06.284 [2024-07-12 13:32:54.644376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.284 [2024-07-12 13:32:54.644466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:06.284 [2024-07-12 13:32:54.644487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.284 [2024-07-12 13:32:54.644512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:06.284 [2024-07-12 13:32:54.644529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.284 [2024-07-12 13:32:54.644661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:06.284 [2024-07-12 13:32:54.644678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.284 NEW_FUNC[1/697]: 0x4ab610 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:07:06.284 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:06.284 #9 NEW cov: 11970 ft: 11964 corp: 2/82b lim: 85 exec/s: 0 rss: 70Mb L: 81/81 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:06.284 [2024-07-12 13:32:54.835114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:06.284 [2024-07-12 13:32:54.835166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.284 [2024-07-12 13:32:54.835276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:06.284 [2024-07-12 13:32:54.835298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.284 [2024-07-12 13:32:54.835418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:06.284 [2024-07-12 13:32:54.835440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.284 [2024-07-12 13:32:54.835567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:06.284 [2024-07-12 13:32:54.835589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.545 #10 NEW cov: 12100 ft: 12530 corp: 3/163b lim: 85 exec/s: 0 rss: 70Mb L: 81/81 MS: 1 ShuffleBytes- 00:07:06.545 [2024-07-12 13:32:54.915430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:06.545 [2024-07-12 13:32:54.915464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:06.545 [2024-07-12 13:32:54.915560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:06.545 [2024-07-12 13:32:54.915582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.545 [2024-07-12 13:32:54.915612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:06.545 [2024-07-12 13:32:54.915629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.545 [2024-07-12 13:32:54.915747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:06.545 [2024-07-12 13:32:54.915766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.545 #11 NEW cov: 12106 ft: 12808 corp: 4/244b lim: 85 exec/s: 0 rss: 70Mb L: 81/81 MS: 1 ShuffleBytes- 00:07:06.545 [2024-07-12 13:32:54.975693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:06.545 [2024-07-12 13:32:54.975730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.545 [2024-07-12 13:32:54.975843] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:06.545 [2024-07-12 13:32:54.975862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.545 [2024-07-12 13:32:54.975943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:06.545 [2024-07-12 13:32:54.975962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.545 [2024-07-12 13:32:54.976081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:06.545 [2024-07-12 13:32:54.976098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.545 #12 NEW cov: 12191 ft: 13213 corp: 5/321b lim: 85 exec/s: 0 rss: 70Mb L: 77/81 MS: 1 EraseBytes- 00:07:06.545 [2024-07-12 13:32:55.035938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:06.545 [2024-07-12 13:32:55.035968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.545 [2024-07-12 13:32:55.036067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:06.545 [2024-07-12 13:32:55.036086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.545 [2024-07-12 13:32:55.036142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:06.545 [2024-07-12 13:32:55.036161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.545 [2024-07-12 13:32:55.036276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:06.545 [2024-07-12 13:32:55.036296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.545 #13 NEW cov: 12191 ft: 13317 corp: 6/402b lim: 85 exec/s: 0 rss: 70Mb L: 81/81 MS: 1 ChangeBinInt- 00:07:06.545 [2024-07-12 13:32:55.106147] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:06.545 [2024-07-12 13:32:55.106179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.545 [2024-07-12 13:32:55.106285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:06.545 [2024-07-12 13:32:55.106305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.545 [2024-07-12 13:32:55.106373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:06.545 [2024-07-12 13:32:55.106394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.545 [2024-07-12 13:32:55.106505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:06.545 [2024-07-12 13:32:55.106522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.806 #14 NEW cov: 12191 ft: 13444 corp: 7/483b lim: 85 exec/s: 0 rss: 70Mb L: 81/81 MS: 1 CMP- DE: "\377\377\377\377"- 00:07:06.806 [2024-07-12 13:32:55.166676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:06.806 [2024-07-12 13:32:55.166706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.806 [2024-07-12 13:32:55.166819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:06.806 [2024-07-12 13:32:55.166842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.806 [2024-07-12 13:32:55.166903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:06.806 [2024-07-12 13:32:55.166922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.806 [2024-07-12 13:32:55.167036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:06.806 [2024-07-12 13:32:55.167055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.806 [2024-07-12 13:32:55.167167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:06.806 [2024-07-12 13:32:55.167183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:06.806 #15 NEW cov: 12191 ft: 13515 corp: 8/568b lim: 85 exec/s: 0 rss: 70Mb L: 85/85 MS: 1 CopyPart- 00:07:06.806 [2024-07-12 13:32:55.246986] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:06.806 [2024-07-12 13:32:55.247014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.806 [2024-07-12 13:32:55.247119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:06.806 [2024-07-12 13:32:55.247138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.806 [2024-07-12 13:32:55.247234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:06.806 [2024-07-12 13:32:55.247249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.806 [2024-07-12 13:32:55.247364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:06.806 [2024-07-12 13:32:55.247381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.806 [2024-07-12 13:32:55.247503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:06.806 [2024-07-12 13:32:55.247524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:06.806 #21 NEW cov: 12191 ft: 13590 corp: 9/653b lim: 85 exec/s: 0 rss: 70Mb L: 85/85 MS: 1 ChangeBinInt- 00:07:06.806 [2024-07-12 13:32:55.317235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:06.806 [2024-07-12 13:32:55.317260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.806 [2024-07-12 13:32:55.317380] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:06.807 [2024-07-12 13:32:55.317400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.807 [2024-07-12 13:32:55.317486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:06.807 [2024-07-12 13:32:55.317502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.807 [2024-07-12 13:32:55.317619] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:06.807 [2024-07-12 13:32:55.317637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.807 [2024-07-12 13:32:55.317752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:06.807 [2024-07-12 13:32:55.317773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:06.807 #22 NEW cov: 12191 ft: 13630 corp: 10/738b lim: 85 exec/s: 0 rss: 70Mb L: 85/85 MS: 1 InsertRepeatedBytes- 00:07:06.807 [2024-07-12 13:32:55.377443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:06.807 
[2024-07-12 13:32:55.377474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.807 [2024-07-12 13:32:55.377594] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:06.807 [2024-07-12 13:32:55.377615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.807 [2024-07-12 13:32:55.377706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:06.807 [2024-07-12 13:32:55.377723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.807 [2024-07-12 13:32:55.377844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:06.807 [2024-07-12 13:32:55.377866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.807 [2024-07-12 13:32:55.377988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:06.807 [2024-07-12 13:32:55.378005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.068 #23 NEW cov: 12191 ft: 13732 corp: 11/823b lim: 85 exec/s: 0 rss: 70Mb L: 85/85 MS: 1 ShuffleBytes- 00:07:07.068 [2024-07-12 13:32:55.457233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:07.068 [2024-07-12 13:32:55.457266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.068 [2024-07-12 13:32:55.457379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:07.068 [2024-07-12 13:32:55.457400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.068 [2024-07-12 13:32:55.457460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:07.068 [2024-07-12 13:32:55.457477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.068 [2024-07-12 13:32:55.457598] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:07.068 [2024-07-12 13:32:55.457616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.068 #24 NEW cov: 12191 ft: 13805 corp: 12/904b lim: 85 exec/s: 0 rss: 70Mb L: 81/85 MS: 1 ChangeByte- 00:07:07.068 [2024-07-12 13:32:55.517826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:07.068 [2024-07-12 13:32:55.517858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.068 [2024-07-12 13:32:55.517979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:07.068 [2024-07-12 13:32:55.517996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.068 [2024-07-12 13:32:55.518079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:07.068 [2024-07-12 13:32:55.518095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.068 [2024-07-12 13:32:55.518210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:07.068 [2024-07-12 13:32:55.518234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.068 [2024-07-12 13:32:55.518354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:07.068 [2024-07-12 13:32:55.518373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.068 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:07.068 #25 NEW cov: 12214 ft: 13837 corp: 13/989b lim: 85 exec/s: 0 rss: 72Mb L: 85/85 MS: 1 ChangeByte- 00:07:07.068 [2024-07-12 13:32:55.577611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:07.068 [2024-07-12 13:32:55.577645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.068 [2024-07-12 13:32:55.577742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:07.068 [2024-07-12 13:32:55.577762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.068 [2024-07-12 13:32:55.577810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:07.068 [2024-07-12 13:32:55.577829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.068 [2024-07-12 13:32:55.577941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:07.068 [2024-07-12 13:32:55.577958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.068 #26 NEW cov: 12214 ft: 13844 corp: 14/1066b lim: 85 exec/s: 26 rss: 72Mb L: 77/85 MS: 1 ChangeBit- 00:07:07.329 [2024-07-12 13:32:55.658011] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:07.329 [2024-07-12 13:32:55.658045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.329 [2024-07-12 13:32:55.658144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:07.329 [2024-07-12 13:32:55.658165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.329 [2024-07-12 13:32:55.658216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:07.329 [2024-07-12 13:32:55.658235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.329 [2024-07-12 13:32:55.658364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:07.329 [2024-07-12 13:32:55.658382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.329 #27 NEW cov: 12214 ft: 13864 corp: 15/1148b lim: 85 exec/s: 27 rss: 72Mb L: 82/85 MS: 1 CopyPart- 00:07:07.329 [2024-07-12 13:32:55.718563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:07.329 [2024-07-12 13:32:55.718594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.329 [2024-07-12 13:32:55.718707] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:07.329 [2024-07-12 13:32:55.718725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.329 [2024-07-12 13:32:55.718794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:07.329 [2024-07-12 13:32:55.718812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.329 [2024-07-12 13:32:55.718924] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:07.329 [2024-07-12 13:32:55.718941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.329 [2024-07-12 13:32:55.719056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:07.329 [2024-07-12 13:32:55.719075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.329 #28 NEW cov: 12214 ft: 13884 corp: 16/1233b lim: 85 exec/s: 28 rss: 72Mb L: 85/85 MS: 1 ChangeBit- 00:07:07.329 [2024-07-12 13:32:55.778842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:07.329 [2024-07-12 13:32:55.778874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.330 [2024-07-12 13:32:55.778983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:07.330 [2024-07-12 13:32:55.779005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.330 [2024-07-12 13:32:55.779064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:07.330 [2024-07-12 13:32:55.779082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.330 [2024-07-12 13:32:55.779191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:07.330 [2024-07-12 13:32:55.779211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.330 [2024-07-12 
13:32:55.779326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:07.330 [2024-07-12 13:32:55.779346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.330 #29 NEW cov: 12214 ft: 13963 corp: 17/1318b lim: 85 exec/s: 29 rss: 72Mb L: 85/85 MS: 1 ChangeByte- 00:07:07.330 [2024-07-12 13:32:55.838705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:07.330 [2024-07-12 13:32:55.838739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.330 [2024-07-12 13:32:55.838846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:07.330 [2024-07-12 13:32:55.838866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.330 [2024-07-12 13:32:55.838924] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:07.330 [2024-07-12 13:32:55.838941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.330 [2024-07-12 13:32:55.839056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:07.330 [2024-07-12 13:32:55.839076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.330 #30 NEW cov: 12214 ft: 14013 corp: 18/1399b lim: 85 exec/s: 30 rss: 72Mb L: 81/85 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:07:07.330 [2024-07-12 13:32:55.908916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:07.330 [2024-07-12 13:32:55.908948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.330 [2024-07-12 13:32:55.909054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:07.330 [2024-07-12 13:32:55.909075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.330 [2024-07-12 13:32:55.909123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:07.330 [2024-07-12 13:32:55.909141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.330 [2024-07-12 13:32:55.909261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:07.330 [2024-07-12 13:32:55.909283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.591 #36 NEW cov: 12214 ft: 14029 corp: 19/1480b lim: 85 exec/s: 36 rss: 72Mb L: 81/85 MS: 1 ChangeByte- 00:07:07.591 [2024-07-12 13:32:55.969153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:07.591 [2024-07-12 13:32:55.969183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:07.591 [2024-07-12 13:32:55.969299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:07.591 [2024-07-12 13:32:55.969322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.591 [2024-07-12 13:32:55.969387] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:07.591 [2024-07-12 13:32:55.969406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.591 [2024-07-12 13:32:55.969517] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:07.591 [2024-07-12 13:32:55.969535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.591 #37 NEW cov: 12214 ft: 14039 corp: 20/1561b lim: 85 exec/s: 37 rss: 72Mb L: 81/85 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:07:07.591 [2024-07-12 13:32:56.039776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:07.591 [2024-07-12 13:32:56.039809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.591 [2024-07-12 13:32:56.039921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:07.591 [2024-07-12 13:32:56.039944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.591 [2024-07-12 13:32:56.040029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:07.591 [2024-07-12 13:32:56.040046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.591 [2024-07-12 13:32:56.040155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:07.591 [2024-07-12 13:32:56.040172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.591 [2024-07-12 13:32:56.040301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:07.591 [2024-07-12 13:32:56.040319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.591 [2024-07-12 13:32:56.109394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:07.591 [2024-07-12 13:32:56.109424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.591 [2024-07-12 13:32:56.109524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:07.591 [2024-07-12 13:32:56.109545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.591 [2024-07-12 13:32:56.109594] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:07.591 [2024-07-12 13:32:56.109614] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.591 #39 NEW cov: 12214 ft: 14399 corp: 21/1620b lim: 85 exec/s: 39 rss: 72Mb L: 59/85 MS: 2 ChangeBit-EraseBytes- 00:07:07.591 [2024-07-12 13:32:56.169994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:07.591 [2024-07-12 13:32:56.170025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.591 [2024-07-12 13:32:56.170132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:07.591 [2024-07-12 13:32:56.170153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.591 [2024-07-12 13:32:56.170215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:07.591 [2024-07-12 13:32:56.170236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.591 [2024-07-12 13:32:56.170357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:07.591 [2024-07-12 13:32:56.170375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.853 #40 NEW cov: 12214 ft: 14429 corp: 22/1697b lim: 85 exec/s: 40 rss: 72Mb L: 77/85 MS: 1 CrossOver- 00:07:07.853 [2024-07-12 13:32:56.240682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:07.854 [2024-07-12 13:32:56.240714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.854 [2024-07-12 13:32:56.240842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:07.854 [2024-07-12 13:32:56.240865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.854 [2024-07-12 13:32:56.240952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:07.854 [2024-07-12 13:32:56.240970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.854 [2024-07-12 13:32:56.241087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:07.854 [2024-07-12 13:32:56.241104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.854 [2024-07-12 13:32:56.241214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:07.854 [2024-07-12 13:32:56.241238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.854 #41 NEW cov: 12214 ft: 14431 corp: 23/1782b lim: 85 exec/s: 41 rss: 72Mb L: 85/85 MS: 1 CrossOver- 00:07:07.854 [2024-07-12 13:32:56.321059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:07.854 [2024-07-12 
13:32:56.321090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.854 [2024-07-12 13:32:56.321196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:07.854 [2024-07-12 13:32:56.321214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.854 [2024-07-12 13:32:56.321306] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:07.854 [2024-07-12 13:32:56.321324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.854 [2024-07-12 13:32:56.321438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:07.854 [2024-07-12 13:32:56.321457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.854 [2024-07-12 13:32:56.321575] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:07.854 [2024-07-12 13:32:56.321594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.854 #47 NEW cov: 12214 ft: 14456 corp: 24/1867b lim: 85 exec/s: 47 rss: 72Mb L: 85/85 MS: 1 ShuffleBytes- 00:07:07.854 [2024-07-12 13:32:56.390866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:07.854 [2024-07-12 13:32:56.390896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.854 [2024-07-12 13:32:56.390998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:07.854 [2024-07-12 13:32:56.391016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.854 [2024-07-12 13:32:56.391095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:07.854 [2024-07-12 13:32:56.391112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.854 [2024-07-12 13:32:56.391237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:07.854 [2024-07-12 13:32:56.391254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.854 #48 NEW cov: 12214 ft: 14466 corp: 25/1943b lim: 85 exec/s: 48 rss: 72Mb L: 76/85 MS: 1 EraseBytes- 00:07:08.115 [2024-07-12 13:32:56.451427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:08.115 [2024-07-12 13:32:56.451459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.115 [2024-07-12 13:32:56.451578] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:08.115 [2024-07-12 13:32:56.451599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.115 [2024-07-12 13:32:56.451682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:08.115 [2024-07-12 13:32:56.451702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.115 [2024-07-12 13:32:56.451809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:08.115 [2024-07-12 13:32:56.451828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.115 [2024-07-12 13:32:56.451950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:08.115 [2024-07-12 13:32:56.451968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:08.115 #49 NEW cov: 12214 ft: 14474 corp: 26/2028b lim: 85 exec/s: 49 rss: 72Mb L: 85/85 MS: 1 ChangeByte- 00:07:08.115 [2024-07-12 13:32:56.511320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:08.115 [2024-07-12 13:32:56.511348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.115 [2024-07-12 13:32:56.511458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:08.115 [2024-07-12 13:32:56.511479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.115 [2024-07-12 13:32:56.511533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:08.115 [2024-07-12 13:32:56.511551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.115 [2024-07-12 13:32:56.511672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:08.115 [2024-07-12 13:32:56.511692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.115 #50 NEW cov: 12214 ft: 14483 corp: 27/2106b lim: 85 exec/s: 50 rss: 72Mb L: 78/85 MS: 1 InsertByte- 00:07:08.115 [2024-07-12 13:32:56.571867] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:08.115 [2024-07-12 13:32:56.571894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.115 [2024-07-12 13:32:56.572031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:08.115 [2024-07-12 13:32:56.572052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.115 [2024-07-12 13:32:56.572147] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:08.115 [2024-07-12 13:32:56.572162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.115 [2024-07-12 13:32:56.572274] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:08.115 [2024-07-12 13:32:56.572297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.115 [2024-07-12 13:32:56.572408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:08.115 [2024-07-12 13:32:56.572427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:08.115 #51 NEW cov: 12214 ft: 14550 corp: 28/2191b lim: 85 exec/s: 51 rss: 72Mb L: 85/85 MS: 1 CopyPart- 00:07:08.115 [2024-07-12 13:32:56.631537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:08.115 [2024-07-12 13:32:56.631568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.115 [2024-07-12 13:32:56.631666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:08.115 [2024-07-12 13:32:56.631684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.115 [2024-07-12 13:32:56.631750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:08.115 [2024-07-12 13:32:56.631768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.115 #52 NEW cov: 12214 ft: 14617 corp: 29/2254b lim: 85 exec/s: 26 rss: 72Mb L: 63/85 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:07:08.115 #52 DONE cov: 12214 ft: 14617 corp: 29/2254b lim: 85 exec/s: 26 rss: 72Mb 00:07:08.115 ###### Recommended dictionary. ###### 00:07:08.115 "\377\377\377\377" # Uses: 5 00:07:08.115 ###### End of recommended dictionary. 
###### 00:07:08.115 Done 52 runs in 2 second(s) 00:07:08.376 13:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:07:08.376 13:32:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:08.376 13:32:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:08.376 13:32:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:07:08.376 13:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:07:08.376 13:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:08.376 13:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:08.376 13:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:08.376 13:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:07:08.376 13:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:08.376 13:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:08.376 13:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:07:08.376 13:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:07:08.376 13:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:08.376 13:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:07:08.376 13:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:08.376 13:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:08.376 13:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:08.376 13:32:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:07:08.376 [2024-07-12 13:32:56.814721] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
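[Editor's sketch, not console output] A minimal shell reconstruction of one start_llvm_fuzz iteration, pieced together from the nvmf/run.sh xtrace lines above (fuzzer_type=23). The output redirections and the $spdk shorthand (= /var/jenkins/workspace/short-fuzz-phy-autotest/spdk) are inferred — bash xtrace does not print redirects — so this approximates the harness rather than reproducing run.sh verbatim:

  fuzzer_type=23 timen=1 core=0x1           # args seen at common.sh@73
  spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # shorthand, inferred
  corpus_dir=$spdk/../corpus/llvm_nvmf_$fuzzer_type          # run.sh@26
  nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf                  # run.sh@27
  suppress_file=/var/tmp/suppress_nvmf_fuzz                  # run.sh@28
  port=44$(printf %02d $fuzzer_type)                         # run.sh@34: 23 -> 4423
  mkdir -p $corpus_dir                                       # run.sh@35
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
  # run.sh@38: retarget the JSON config at this run's port (redirect inferred)
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      $spdk/test/fuzz/llvm/nvmf/fuzz_json.conf > $nvmf_cfg
  # run.sh@41-42: LeakSanitizer suppressions for objects held at shutdown (redirects inferred)
  echo leak:spdk_nvmf_qpair_disconnect  > $suppress_file
  echo leak:nvmf_ctrlr_create          >> $suppress_file
  LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0 \
    $spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m $core -s 512 \
      -P $spdk/../output/llvm/ -F "$trid" -c $nvmf_cfg -t $timen \
      -D $corpus_dir -Z $fuzzer_type                         # run.sh@45
  rm -rf $nvmf_cfg $suppress_file                            # run.sh@54: teardown

Each fuzzer number thus gets its own TCP listener port (44NN), JSON config, and corpus directory, which is what lets common.sh cycle the whole sequence of targets on a single node without collisions.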
00:07:08.376 [2024-07-12 13:32:56.814818] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2449022 ] 00:07:08.376 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.637 [2024-07-12 13:32:56.969162] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.637 [2024-07-12 13:32:57.020413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.637 [2024-07-12 13:32:57.081831] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.637 [2024-07-12 13:32:57.098134] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:07:08.637 INFO: Running with entropic power schedule (0xFF, 100). 00:07:08.637 INFO: Seed: 4214056508 00:07:08.637 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:08.637 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:08.637 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:08.637 INFO: A corpus is not provided, starting from an empty corpus 00:07:08.637 #2 INITED exec/s: 0 rss: 63Mb 00:07:08.637 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:08.637 This may also happen if the target rejected all inputs we tried so far 00:07:08.637 [2024-07-12 13:32:57.153074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:08.637 [2024-07-12 13:32:57.153107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.896 NEW_FUNC[1/695]: 0x4ae840 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:08.897 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:08.897 #4 NEW cov: 11894 ft: 11904 corp: 2/6b lim: 25 exec/s: 0 rss: 69Mb L: 5/5 MS: 2 ChangeBit-CMP- DE: "\377\377\377["- 00:07:08.897 [2024-07-12 13:32:57.333713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:08.897 [2024-07-12 13:32:57.333774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.897 NEW_FUNC[1/1]: 0x12f1d50 in nvmf_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/nvmf.c:150 00:07:08.897 #5 NEW cov: 12033 ft: 12637 corp: 3/12b lim: 25 exec/s: 0 rss: 69Mb L: 6/6 MS: 1 InsertByte- 00:07:08.897 [2024-07-12 13:32:57.403591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:08.897 [2024-07-12 13:32:57.403620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.897 #11 NEW cov: 12039 ft: 12788 corp: 4/18b lim: 25 exec/s: 0 rss: 69Mb L: 6/6 MS: 1 CrossOver- 00:07:08.897 [2024-07-12 13:32:57.443666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:08.897 [2024-07-12 13:32:57.443692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.157 #12 NEW cov: 12124 ft: 13082 corp: 5/25b lim: 25 exec/s: 0 rss: 69Mb L: 7/7 MS: 1 InsertByte- 00:07:09.157 [2024-07-12 13:32:57.503859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:09.157 [2024-07-12 13:32:57.503885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.157 #13 NEW cov: 12124 ft: 13213 corp: 6/31b lim: 25 exec/s: 0 rss: 69Mb L: 6/7 MS: 1 CrossOver- 00:07:09.157 [2024-07-12 13:32:57.563963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:09.157 [2024-07-12 13:32:57.563989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.157 #14 NEW cov: 12124 ft: 13249 corp: 7/37b lim: 25 exec/s: 0 rss: 69Mb L: 6/7 MS: 1 ChangeBit- 00:07:09.157 [2024-07-12 13:32:57.604104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:09.157 [2024-07-12 13:32:57.604130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.157 #15 NEW cov: 12124 ft: 13309 corp: 8/45b lim: 25 exec/s: 0 rss: 69Mb L: 8/8 MS: 1 InsertByte- 00:07:09.157 [2024-07-12 13:32:57.664263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:09.157 [2024-07-12 13:32:57.664289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.157 #16 NEW cov: 12124 ft: 13376 corp: 9/52b lim: 25 exec/s: 0 rss: 69Mb L: 7/8 MS: 1 ChangeByte- 00:07:09.157 [2024-07-12 13:32:57.704358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:09.157 [2024-07-12 13:32:57.704383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.157 #17 NEW cov: 12124 ft: 13401 corp: 10/59b lim: 25 exec/s: 0 rss: 69Mb L: 7/8 MS: 1 ChangeByte- 00:07:09.417 [2024-07-12 13:32:57.744448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:09.417 [2024-07-12 13:32:57.744474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.417 #18 NEW cov: 12124 ft: 13464 corp: 11/67b lim: 25 exec/s: 0 rss: 70Mb L: 8/8 MS: 1 ChangeByte- 00:07:09.417 [2024-07-12 13:32:57.804716] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:09.417 [2024-07-12 13:32:57.804741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.417 [2024-07-12 13:32:57.804776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:09.417 [2024-07-12 13:32:57.804790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.417 #19 NEW cov: 12124 ft: 13860 corp: 12/80b lim: 25 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 CrossOver- 00:07:09.417 [2024-07-12 13:32:57.854762] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:09.417 [2024-07-12 13:32:57.854786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.417 #20 NEW cov: 12124 ft: 13931 corp: 13/88b lim: 25 exec/s: 0 rss: 70Mb L: 8/13 MS: 1 CrossOver- 00:07:09.417 [2024-07-12 13:32:57.915015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:09.417 [2024-07-12 13:32:57.915039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.417 [2024-07-12 13:32:57.915084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:09.417 [2024-07-12 13:32:57.915094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.417 #21 NEW cov: 12124 ft: 13957 corp: 14/102b lim: 25 exec/s: 0 rss: 70Mb L: 14/14 MS: 1 InsertByte- 00:07:09.417 [2024-07-12 13:32:57.975159] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:09.417 [2024-07-12 13:32:57.975182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.417 [2024-07-12 13:32:57.975223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:09.417 [2024-07-12 13:32:57.975237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.678 #22 NEW cov: 12124 ft: 13972 corp: 15/113b lim: 25 exec/s: 0 rss: 70Mb L: 11/14 MS: 1 CopyPart- 00:07:09.678 [2024-07-12 13:32:58.025203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:09.678 [2024-07-12 13:32:58.025233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.678 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:09.678 #23 NEW cov: 12147 ft: 14010 corp: 16/121b lim: 25 exec/s: 0 rss: 70Mb L: 8/14 MS: 1 InsertByte- 00:07:09.678 [2024-07-12 13:32:58.085713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:09.678 [2024-07-12 13:32:58.085741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.678 [2024-07-12 13:32:58.085780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:09.678 [2024-07-12 13:32:58.085792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.678 [2024-07-12 13:32:58.085826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:09.678 [2024-07-12 13:32:58.085837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.678 [2024-07-12 13:32:58.085880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:09.678 
[2024-07-12 13:32:58.085892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.678 [2024-07-12 13:32:58.085932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:09.678 [2024-07-12 13:32:58.085944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:09.678 #24 NEW cov: 12147 ft: 14536 corp: 17/146b lim: 25 exec/s: 0 rss: 70Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:09.678 [2024-07-12 13:32:58.145517] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:09.678 [2024-07-12 13:32:58.145543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.678 #25 NEW cov: 12147 ft: 14589 corp: 18/154b lim: 25 exec/s: 25 rss: 70Mb L: 8/25 MS: 1 ChangeByte- 00:07:09.678 [2024-07-12 13:32:58.205682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:09.678 [2024-07-12 13:32:58.205709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.678 #26 NEW cov: 12147 ft: 14596 corp: 19/162b lim: 25 exec/s: 26 rss: 70Mb L: 8/25 MS: 1 ChangeBit- 00:07:09.678 [2024-07-12 13:32:58.245775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:09.678 [2024-07-12 13:32:58.245801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.938 #27 NEW cov: 12147 ft: 14629 corp: 20/171b lim: 25 exec/s: 27 rss: 70Mb L: 9/25 MS: 1 InsertByte- 00:07:09.938 [2024-07-12 13:32:58.285964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:09.938 [2024-07-12 13:32:58.285988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.938 [2024-07-12 13:32:58.286031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:09.938 [2024-07-12 13:32:58.286040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.938 #28 NEW cov: 12147 ft: 14648 corp: 21/183b lim: 25 exec/s: 28 rss: 70Mb L: 12/25 MS: 1 PersAutoDict- DE: "\377\377\377["- 00:07:09.938 [2024-07-12 13:32:58.346129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:09.938 [2024-07-12 13:32:58.346153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.938 [2024-07-12 13:32:58.346195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:09.938 [2024-07-12 13:32:58.346205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.938 #29 NEW cov: 12147 ft: 14691 corp: 22/194b lim: 25 exec/s: 29 rss: 70Mb L: 11/25 MS: 1 InsertRepeatedBytes- 00:07:09.938 [2024-07-12 13:32:58.396167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) 
sqid:1 cid:0 nsid:0 00:07:09.938 [2024-07-12 13:32:58.396190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.938 #30 NEW cov: 12147 ft: 14697 corp: 23/203b lim: 25 exec/s: 30 rss: 70Mb L: 9/25 MS: 1 InsertByte- 00:07:09.938 [2024-07-12 13:32:58.436287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:09.938 [2024-07-12 13:32:58.436311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.938 #31 NEW cov: 12147 ft: 14714 corp: 24/212b lim: 25 exec/s: 31 rss: 70Mb L: 9/25 MS: 1 InsertByte- 00:07:09.938 [2024-07-12 13:32:58.496448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:09.938 [2024-07-12 13:32:58.496471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.199 #32 NEW cov: 12147 ft: 14721 corp: 25/221b lim: 25 exec/s: 32 rss: 70Mb L: 9/25 MS: 1 CrossOver- 00:07:10.199 [2024-07-12 13:32:58.557005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:10.199 [2024-07-12 13:32:58.557029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.199 [2024-07-12 13:32:58.557072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:10.199 [2024-07-12 13:32:58.557083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.199 [2024-07-12 13:32:58.557116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:10.199 [2024-07-12 13:32:58.557127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.199 [2024-07-12 13:32:58.557167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:10.199 [2024-07-12 13:32:58.557178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.199 [2024-07-12 13:32:58.557219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:10.199 [2024-07-12 13:32:58.557235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:10.199 #33 NEW cov: 12147 ft: 14739 corp: 26/246b lim: 25 exec/s: 33 rss: 72Mb L: 25/25 MS: 1 ChangeBit- 00:07:10.199 [2024-07-12 13:32:58.617118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:10.199 [2024-07-12 13:32:58.617142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.199 [2024-07-12 13:32:58.617180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:10.199 [2024-07-12 13:32:58.617192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.199 [2024-07-12 
13:32:58.617211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:10.199 [2024-07-12 13:32:58.617222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.199 [2024-07-12 13:32:58.617267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:10.199 [2024-07-12 13:32:58.617279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.199 [2024-07-12 13:32:58.617320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:10.199 [2024-07-12 13:32:58.617331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:10.199 #34 NEW cov: 12147 ft: 14794 corp: 27/271b lim: 25 exec/s: 34 rss: 72Mb L: 25/25 MS: 1 CrossOver- 00:07:10.199 [2024-07-12 13:32:58.676929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:10.199 [2024-07-12 13:32:58.676953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.199 #35 NEW cov: 12147 ft: 14827 corp: 28/277b lim: 25 exec/s: 35 rss: 72Mb L: 6/25 MS: 1 EraseBytes- 00:07:10.199 [2024-07-12 13:32:58.737086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:10.199 [2024-07-12 13:32:58.737110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.199 #36 NEW cov: 12147 ft: 14838 corp: 29/283b lim: 25 exec/s: 36 rss: 72Mb L: 6/25 MS: 1 ChangeBinInt- 00:07:10.199 [2024-07-12 13:32:58.777271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:10.199 [2024-07-12 13:32:58.777296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.199 [2024-07-12 13:32:58.777337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:10.199 [2024-07-12 13:32:58.777352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.459 #42 NEW cov: 12147 ft: 14862 corp: 30/295b lim: 25 exec/s: 42 rss: 72Mb L: 12/25 MS: 1 CMP- DE: "\377&\033\021=]>\243"- 00:07:10.459 [2024-07-12 13:32:58.837364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:10.459 [2024-07-12 13:32:58.837389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.459 #43 NEW cov: 12147 ft: 14873 corp: 31/301b lim: 25 exec/s: 43 rss: 72Mb L: 6/25 MS: 1 ShuffleBytes- 00:07:10.459 [2024-07-12 13:32:58.897509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:10.459 [2024-07-12 13:32:58.897534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.459 #44 NEW cov: 12147 ft: 14888 corp: 32/306b lim: 25 exec/s: 44 rss: 72Mb L: 5/25 MS: 1 ChangeBinInt- 
00:07:10.459 [2024-07-12 13:32:58.947720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:10.459 [2024-07-12 13:32:58.947745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.459 [2024-07-12 13:32:58.947788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:10.459 [2024-07-12 13:32:58.947798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.459 #45 NEW cov: 12147 ft: 14915 corp: 33/317b lim: 25 exec/s: 45 rss: 72Mb L: 11/25 MS: 1 ShuffleBytes- 00:07:10.459 [2024-07-12 13:32:59.007892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:10.459 [2024-07-12 13:32:59.007917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.459 [2024-07-12 13:32:59.007958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:10.459 [2024-07-12 13:32:59.007968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.459 #46 NEW cov: 12147 ft: 14917 corp: 34/328b lim: 25 exec/s: 46 rss: 72Mb L: 11/25 MS: 1 ChangeByte- 00:07:10.719 [2024-07-12 13:32:59.058213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:10.720 [2024-07-12 13:32:59.058243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.720 [2024-07-12 13:32:59.058281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:10.720 [2024-07-12 13:32:59.058293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.720 [2024-07-12 13:32:59.058323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:10.720 [2024-07-12 13:32:59.058336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.720 [2024-07-12 13:32:59.058376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:10.720 [2024-07-12 13:32:59.058388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.720 #47 NEW cov: 12147 ft: 14938 corp: 35/348b lim: 25 exec/s: 47 rss: 72Mb L: 20/25 MS: 1 InsertRepeatedBytes- 00:07:10.720 [2024-07-12 13:32:59.118096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:10.720 [2024-07-12 13:32:59.118120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.720 #48 NEW cov: 12147 ft: 14955 corp: 36/355b lim: 25 exec/s: 24 rss: 72Mb L: 7/25 MS: 1 CMP- DE: "\377\377"- 00:07:10.720 #48 DONE cov: 12147 ft: 14955 corp: 36/355b lim: 25 exec/s: 24 rss: 72Mb 00:07:10.720 ###### Recommended dictionary. 
###### 00:07:10.720 "\377\377\377[" # Uses: 1 00:07:10.720 "\377&\033\021=]>\243" # Uses: 0 00:07:10.720 "\377\377" # Uses: 0 00:07:10.720 ###### End of recommended dictionary. ###### 00:07:10.720 Done 48 runs in 2 second(s) 00:07:10.720 13:32:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:07:10.720 13:32:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:10.720 13:32:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:10.720 13:32:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:07:10.720 13:32:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:07:10.720 13:32:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:10.720 13:32:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:10.720 13:32:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:10.720 13:32:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:07:10.720 13:32:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:10.720 13:32:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:10.720 13:32:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:07:10.720 13:32:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:07:10.720 13:32:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:10.720 13:32:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:07:10.720 13:32:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:10.720 13:32:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:10.720 13:32:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:10.720 13:32:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:07:10.720 [2024-07-12 13:32:59.277969] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
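[Editor's note, not console output] The "#N NEW ..." records throughout these runs are standard LLVM libFuzzer status lines; the field meanings below follow libFuzzer's documented output format rather than anything SPDK-specific, so treat this as a reading aid. Decoding one record from the run that just finished above:

  #48 NEW cov: 12147 ft: 14955 corp: 36/355b lim: 25 exec/s: 24 rss: 72Mb L: 7/25 MS: 1 CMP- DE: "\377\377"-
  #48       event fired at input number 48 (NEW = input added to the corpus)
  cov/ft    coverage edges and features observed so far
  corp      corpus now holds 36 inputs totalling 355 bytes
  lim       current input-length cap; exec/s and rss are throughput and memory
  L: 7/25   this input is 7 bytes; the largest corpus input is 25 bytes
  MS/DE     mutation sequence that produced it (1 mutation, CMP) and the dictionary entry it used

The closing "#N DONE" line and "Done N runs in 2 second(s)" report the same counters at shutdown, and the "Recommended dictionary" block lists the auto-discovered entries (with use counts) that fed the PersAutoDict/CMP mutations seen in the records.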
00:07:10.720 [2024-07-12 13:32:59.278064] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2449422 ] 00:07:10.980 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.980 [2024-07-12 13:32:59.441931] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.980 [2024-07-12 13:32:59.498857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.980 [2024-07-12 13:32:59.560505] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.240 [2024-07-12 13:32:59.576856] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:11.240 INFO: Running with entropic power schedule (0xFF, 100). 00:07:11.240 INFO: Seed: 2398094324 00:07:11.240 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:11.240 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:11.240 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:11.240 INFO: A corpus is not provided, starting from an empty corpus 00:07:11.240 #2 INITED exec/s: 0 rss: 64Mb 00:07:11.240 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:11.240 This may also happen if the target rejected all inputs we tried so far 00:07:11.240 [2024-07-12 13:32:59.643352] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.240 [2024-07-12 13:32:59.643396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.240 NEW_FUNC[1/693]: 0x4af920 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:11.240 NEW_FUNC[2/693]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:11.240 #8 NEW cov: 11924 ft: 11967 corp: 2/25b lim: 100 exec/s: 0 rss: 70Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:07:11.500 [2024-07-12 13:32:59.834006] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13744632839234567870 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.500 [2024-07-12 13:32:59.834055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.500 NEW_FUNC[1/4]: 0x17fb080 in nvme_get_transport /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_transport.c:56 00:07:11.500 NEW_FUNC[2/4]: 0x1a77c30 in event_queue_run_batch /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:546 00:07:11.500 #9 NEW cov: 12105 ft: 12461 corp: 3/49b lim: 100 exec/s: 0 rss: 70Mb L: 24/24 MS: 1 ChangeBit- 00:07:11.500 [2024-07-12 13:32:59.915027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5497853137606429772 len:19533 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.500 [2024-07-12 13:32:59.915064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.500 [2024-07-12 13:32:59.915177] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5497853135693827148 len:19533 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.500 [2024-07-12 13:32:59.915191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.500 #18 NEW cov: 12111 ft: 13599 corp: 4/92b lim: 100 exec/s: 0 rss: 70Mb L: 43/43 MS: 4 ChangeBit-CopyPart-CrossOver-InsertRepeatedBytes- 00:07:11.500 [2024-07-12 13:32:59.974805] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13744632838697696958 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.500 [2024-07-12 13:32:59.974837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.500 #19 NEW cov: 12196 ft: 13864 corp: 5/116b lim: 100 exec/s: 0 rss: 70Mb L: 24/43 MS: 1 ChangeBit- 00:07:11.500 [2024-07-12 13:33:00.035645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13744423115981504190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.500 [2024-07-12 13:33:00.035680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.500 [2024-07-12 13:33:00.035792] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.500 [2024-07-12 13:33:00.035807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.500 #20 NEW cov: 12196 ft: 13938 corp: 6/168b lim: 100 exec/s: 0 rss: 70Mb L: 52/52 MS: 1 InsertRepeatedBytes- 00:07:11.762 [2024-07-12 13:33:00.095475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13744630640211312318 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.762 [2024-07-12 13:33:00.095508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.762 #21 NEW cov: 12196 ft: 14038 corp: 7/192b lim: 100 exec/s: 0 rss: 70Mb L: 24/52 MS: 1 ChangeBit- 00:07:11.762 [2024-07-12 13:33:00.165696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13744632838697696958 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.762 [2024-07-12 13:33:00.165731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.762 #22 NEW cov: 12196 ft: 14090 corp: 8/216b lim: 100 exec/s: 0 rss: 70Mb L: 24/52 MS: 1 CrossOver- 00:07:11.762 [2024-07-12 13:33:00.235866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13744632839821770430 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.762 [2024-07-12 13:33:00.235898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.762 #23 NEW cov: 12196 ft: 14195 corp: 9/240b lim: 100 exec/s: 0 rss: 70Mb L: 24/52 MS: 1 ChangeByte- 00:07:11.762 [2024-07-12 13:33:00.296055] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13744632839234567838 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.762 [2024-07-12 13:33:00.296093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.762 #24 NEW cov: 12196 ft: 14290 corp: 10/264b lim: 100 exec/s: 0 rss: 70Mb L: 24/52 MS: 1 ShuffleBytes- 00:07:12.023 [2024-07-12 13:33:00.367045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13744632838697696958 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.023 [2024-07-12 13:33:00.367074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.023 [2024-07-12 13:33:00.367178] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.023 [2024-07-12 13:33:00.367197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.023 [2024-07-12 13:33:00.367272] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.023 [2024-07-12 13:33:00.367292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.023 #25 NEW cov: 12196 ft: 14677 corp: 11/331b lim: 100 exec/s: 0 rss: 70Mb L: 67/67 MS: 1 InsertRepeatedBytes- 00:07:12.023 [2024-07-12 13:33:00.426815] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13744630640201285310 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.023 [2024-07-12 13:33:00.426848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.023 #26 NEW cov: 12196 ft: 14736 corp: 12/355b lim: 100 exec/s: 0 rss: 72Mb L: 24/67 MS: 1 ChangeByte- 00:07:12.023 [2024-07-12 13:33:00.497500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:620756992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.023 [2024-07-12 13:33:00.497531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.023 [2024-07-12 13:33:00.497641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.023 [2024-07-12 13:33:00.497656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.023 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:12.023 #29 NEW cov: 12219 ft: 14799 corp: 13/400b lim: 100 exec/s: 0 rss: 72Mb L: 45/67 MS: 3 CopyPart-InsertByte-InsertRepeatedBytes- 00:07:12.023 [2024-07-12 13:33:00.557688] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:620756992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.023 [2024-07-12 13:33:00.557720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.023 [2024-07-12 13:33:00.557842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.023 [2024-07-12 13:33:00.557857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.023 #30 NEW cov: 12219 ft: 14826 corp: 14/446b lim: 100 exec/s: 0 rss: 72Mb L: 46/67 MS: 1 InsertByte- 00:07:12.285 [2024-07-12 13:33:00.638252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13744632838697696958 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.285 [2024-07-12 13:33:00.638287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.285 [2024-07-12 13:33:00.638384] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.285 [2024-07-12 13:33:00.638403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.285 [2024-07-12 13:33:00.638453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.285 [2024-07-12 13:33:00.638472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.285 #31 NEW cov: 12219 ft: 14840 corp: 15/513b lim: 100 exec/s: 31 rss: 72Mb L: 67/67 MS: 1 ChangeByte- 00:07:12.285 [2024-07-12 13:33:00.717604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:620756992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.285 [2024-07-12 13:33:00.717635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.285 [2024-07-12 13:33:00.717749] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.285 [2024-07-12 13:33:00.717764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.285 #32 NEW cov: 12219 ft: 14849 corp: 16/557b lim: 100 exec/s: 32 rss: 72Mb L: 44/67 MS: 1 EraseBytes- 00:07:12.285 [2024-07-12 13:33:00.798751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13744632838697696958 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.285 [2024-07-12 13:33:00.798784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.285 [2024-07-12 13:33:00.798891] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.285 [2024-07-12 13:33:00.798910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.285 [2024-07-12 13:33:00.799004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.285 [2024-07-12 13:33:00.799023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.285 #38 NEW cov: 12219 ft: 14856 corp: 17/624b lim: 100 exec/s: 38 rss: 72Mb L: 67/67 MS: 1 ChangeByte- 00:07:12.547 [2024-07-12 13:33:00.878253] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13744630640201250881 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.547 [2024-07-12 13:33:00.878284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.547 #39 NEW cov: 12219 ft: 14883 corp: 18/648b lim: 100 exec/s: 39 rss: 72Mb L: 24/67 MS: 1 ChangeBinInt- 00:07:12.547 [2024-07-12 13:33:00.948423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13744632838697696958 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.547 [2024-07-12 13:33:00.948461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.547 #40 NEW cov: 12219 ft: 14901 corp: 19/683b lim: 100 exec/s: 40 rss: 72Mb L: 35/67 MS: 1 EraseBytes- 00:07:12.547 [2024-07-12 13:33:01.019519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13744632838697696958 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.547 [2024-07-12 13:33:01.019550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.547 [2024-07-12 13:33:01.019646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.547 [2024-07-12 13:33:01.019665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.547 [2024-07-12 13:33:01.019716] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.547 [2024-07-12 13:33:01.019736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.547 #41 NEW cov: 12219 ft: 14911 corp: 20/753b lim: 100 exec/s: 41 rss: 72Mb L: 70/70 MS: 1 InsertRepeatedBytes- 00:07:12.547 [2024-07-12 13:33:01.078949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13744632836212571838 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.547 [2024-07-12 13:33:01.078982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.547 #42 NEW cov: 12219 ft: 14927 corp: 21/774b lim: 100 exec/s: 42 rss: 72Mb L: 21/70 MS: 1 CrossOver- 00:07:12.809 [2024-07-12 13:33:01.139457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:620756992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.809 [2024-07-12 13:33:01.139492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.809 [2024-07-12 13:33:01.139602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.809 [2024-07-12 13:33:01.139618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.809 #43 NEW cov: 12219 ft: 14939 corp: 22/818b lim: 100 exec/s: 43 rss: 72Mb L: 44/70 MS: 1 ChangeByte- 00:07:12.809 [2024-07-12 13:33:01.220120] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1446803459087973908 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.809 [2024-07-12 13:33:01.220152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.809 [2024-07-12 13:33:01.220251] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1446803456761533460 len:5141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.809 [2024-07-12 13:33:01.220270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.809 [2024-07-12 13:33:01.220338] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13744632836371256340 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.809 [2024-07-12 13:33:01.220361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.809 #49 NEW cov: 12219 ft: 14952 corp: 23/894b lim: 100 exec/s: 49 rss: 73Mb L: 76/76 MS: 1 InsertRepeatedBytes- 00:07:12.809 [2024-07-12 13:33:01.299980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13744632838697696958 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.809 [2024-07-12 13:33:01.300015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.809 [2024-07-12 13:33:01.300132] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.809 [2024-07-12 13:33:01.300146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.809 #50 NEW cov: 12219 ft: 14966 corp: 24/947b lim: 100 exec/s: 50 rss: 73Mb L: 53/76 MS: 1 EraseBytes- 00:07:12.809 [2024-07-12 13:33:01.360146] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:620756992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.809 [2024-07-12 13:33:01.360178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.809 [2024-07-12 13:33:01.360287] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.809 [2024-07-12 13:33:01.360308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.809 #51 NEW cov: 12219 ft: 14974 corp: 25/992b lim: 100 exec/s: 51 rss: 73Mb L: 45/76 MS: 1 ChangeByte- 00:07:13.071 [2024-07-12 13:33:01.420749] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13744423115981504190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.071 [2024-07-12 13:33:01.420782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.071 [2024-07-12 13:33:01.420876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.071 [2024-07-12 13:33:01.420898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.071 [2024-07-12 13:33:01.420931] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13744632836034444990 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.071 [2024-07-12 13:33:01.420948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.071 #52 NEW cov: 12219 ft: 15017 corp: 26/1052b lim: 100 exec/s: 52 rss: 73Mb L: 60/76 MS: 1 InsertRepeatedBytes- 00:07:13.071 [2024-07-12 13:33:01.501328] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13744632838697696958 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.071 [2024-07-12 13:33:01.501362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.071 [2024-07-12 13:33:01.501464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.071 [2024-07-12 13:33:01.501481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.071 [2024-07-12 13:33:01.501541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.071 [2024-07-12 13:33:01.501560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.071 #53 NEW cov: 12219 ft: 15030 corp: 27/1119b lim: 100 exec/s: 53 rss: 73Mb L: 67/76 MS: 1 ChangeBit- 00:07:13.071 [2024-07-12 13:33:01.561066] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13744469475321626302 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.071 [2024-07-12 13:33:01.561098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.071 #54 NEW cov: 12219 ft: 15044 corp: 28/1153b lim: 100 exec/s: 54 rss: 73Mb L: 34/76 MS: 1 EraseBytes- 00:07:13.071 [2024-07-12 13:33:01.622402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13744632838697696958 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.071 [2024-07-12 13:33:01.622434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.071 [2024-07-12 13:33:01.622540] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.071 [2024-07-12 13:33:01.622560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.071 [2024-07-12 13:33:01.622624] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:3038287259199220266 len:10795 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.071 [2024-07-12 13:33:01.622641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.071 [2024-07-12 13:33:01.622758] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 
lba:13600517015503552190 len:48831 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.071 [2024-07-12 13:33:01.622777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.332 #55 NEW cov: 12219 ft: 15430 corp: 29/1239b lim: 100 exec/s: 27 rss: 73Mb L: 86/86 MS: 1 CrossOver- 00:07:13.332 #55 DONE cov: 12219 ft: 15430 corp: 29/1239b lim: 100 exec/s: 27 rss: 73Mb 00:07:13.332 Done 55 runs in 2 second(s) 00:07:13.332 13:33:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:07:13.332 13:33:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:13.332 13:33:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:13.332 13:33:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:07:13.332 00:07:13.332 real 1m2.625s 00:07:13.332 user 1m43.929s 00:07:13.332 sys 0m6.056s 00:07:13.332 13:33:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.332 13:33:01 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:13.332 ************************************ 00:07:13.332 END TEST nvmf_llvm_fuzz 00:07:13.332 ************************************ 00:07:13.332 13:33:01 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0 00:07:13.332 13:33:01 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:13.332 13:33:01 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:13.332 13:33:01 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:13.332 13:33:01 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.332 13:33:01 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.332 13:33:01 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:13.332 ************************************ 00:07:13.332 START TEST vfio_llvm_fuzz 00:07:13.332 ************************************ 00:07:13.332 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:13.332 * Looking for test storage... 
00:07:13.596 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:13.596 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:13.597 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:13.597 #define SPDK_CONFIG_H 00:07:13.597 #define SPDK_CONFIG_APPS 1 00:07:13.597 #define SPDK_CONFIG_ARCH native 00:07:13.597 #undef SPDK_CONFIG_ASAN 00:07:13.597 #undef SPDK_CONFIG_AVAHI 00:07:13.597 #undef SPDK_CONFIG_CET 00:07:13.597 #define SPDK_CONFIG_COVERAGE 1 00:07:13.597 #define SPDK_CONFIG_CROSS_PREFIX 00:07:13.597 #undef SPDK_CONFIG_CRYPTO 00:07:13.597 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:13.597 #undef SPDK_CONFIG_CUSTOMOCF 00:07:13.597 #undef SPDK_CONFIG_DAOS 00:07:13.597 #define SPDK_CONFIG_DAOS_DIR 00:07:13.597 #define SPDK_CONFIG_DEBUG 1 00:07:13.597 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:13.597 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:13.597 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:13.597 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:13.597 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:13.597 #undef SPDK_CONFIG_DPDK_UADK 00:07:13.597 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:13.597 #define SPDK_CONFIG_EXAMPLES 1 00:07:13.597 #undef SPDK_CONFIG_FC 00:07:13.597 #define SPDK_CONFIG_FC_PATH 00:07:13.597 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:13.597 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:13.597 #undef SPDK_CONFIG_FUSE 00:07:13.597 #define SPDK_CONFIG_FUZZER 1 00:07:13.598 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:13.598 #undef SPDK_CONFIG_GOLANG 00:07:13.598 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:13.598 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:13.598 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:13.598 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:13.598 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:13.598 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:13.598 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:13.598 #define SPDK_CONFIG_IDXD 1 00:07:13.598 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:13.598 #undef SPDK_CONFIG_IPSEC_MB 00:07:13.598 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:13.598 #define SPDK_CONFIG_ISAL 1 00:07:13.598 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:07:13.598 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:13.598 #define SPDK_CONFIG_LIBDIR 00:07:13.598 #undef SPDK_CONFIG_LTO 00:07:13.598 #define SPDK_CONFIG_MAX_LCORES 128 00:07:13.598 #define SPDK_CONFIG_NVME_CUSE 1 00:07:13.598 #undef SPDK_CONFIG_OCF 00:07:13.598 #define SPDK_CONFIG_OCF_PATH 00:07:13.598 #define SPDK_CONFIG_OPENSSL_PATH 00:07:13.598 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:13.598 #define SPDK_CONFIG_PGO_DIR 00:07:13.598 #undef SPDK_CONFIG_PGO_USE 00:07:13.598 #define SPDK_CONFIG_PREFIX /usr/local 00:07:13.598 #undef SPDK_CONFIG_RAID5F 00:07:13.598 #undef SPDK_CONFIG_RBD 00:07:13.598 #define SPDK_CONFIG_RDMA 1 00:07:13.598 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:13.598 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:13.598 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:13.598 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:13.598 #undef SPDK_CONFIG_SHARED 00:07:13.598 #undef SPDK_CONFIG_SMA 00:07:13.598 #define SPDK_CONFIG_TESTS 1 00:07:13.598 #undef SPDK_CONFIG_TSAN 00:07:13.598 #define SPDK_CONFIG_UBLK 1 00:07:13.598 #define SPDK_CONFIG_UBSAN 1 00:07:13.598 #undef SPDK_CONFIG_UNIT_TESTS 00:07:13.598 #undef SPDK_CONFIG_URING 00:07:13.598 #define SPDK_CONFIG_URING_PATH 00:07:13.598 #undef SPDK_CONFIG_URING_ZNS 00:07:13.598 #undef SPDK_CONFIG_USDT 00:07:13.598 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:13.598 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:13.598 #define SPDK_CONFIG_VFIO_USER 1 00:07:13.598 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:13.598 #define SPDK_CONFIG_VHOST 1 00:07:13.598 #define SPDK_CONFIG_VIRTIO 1 00:07:13.598 #undef SPDK_CONFIG_VTUNE 00:07:13.598 #define SPDK_CONFIG_VTUNE_DIR 00:07:13.598 #define SPDK_CONFIG_WERROR 1 00:07:13.598 #define SPDK_CONFIG_WPDK_DIR 00:07:13.598 #undef SPDK_CONFIG_XNVME 00:07:13.598 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:13.598 
13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:13.598 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:13.599 13:33:01 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # : 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:13.599 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:13.600 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:13.600 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:13.600 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:13.600 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:13.600 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:13.600 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:13.600 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:13.600 13:33:01 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # cat 00:07:13.600 13:33:02 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:13.600 13:33:02 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 2450180 ]] 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 2450180 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # set_test_storage 2147483648 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.mm6An2 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.mm6An2/tests/vfio /tmp/spdk.mm6An2 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- 
# uses["$mount"]=0 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=956157952 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4328271872 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=121089982464 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370980352 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=8280997888 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=64680779776 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=25867657216 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874198528 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=6541312 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=179200 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=324608 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:13.600 13:33:02 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=64684015616 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=1474560 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=12937093120 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937097216 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:13.600 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:13.601 * Looking for test storage... 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=121089982464 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=10495590400 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:13.601 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1709 -- # set -o errtrace 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1710 -- # shopt -s extdebug 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1713 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1714 -- # true 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1716 -- # xtrace_fd 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # 
local timen=1 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:07:13.601 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:13.601 13:33:02 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:07:13.601 [2024-07-12 13:33:02.121016] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:07:13.601 [2024-07-12 13:33:02.121100] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2450220 ] 00:07:13.601 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.862 [2024-07-12 13:33:02.192522] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.862 [2024-07-12 13:33:02.267985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.862 INFO: Running with entropic power schedule (0xFF, 100). 00:07:13.862 INFO: Seed: 945126060 00:07:14.122 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296d90c, 0x29c43f5), 00:07:14.122 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c43f8,0x2f2f288), 00:07:14.122 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:14.122 INFO: A corpus is not provided, starting from an empty corpus 00:07:14.122 #2 INITED exec/s: 0 rss: 65Mb 00:07:14.122 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
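Note on the set_test_storage trace above (autotest_common.sh @327 through @389): it picks a scratch directory by parsing df -T into per-mount associative arrays and taking the first candidate with enough free space. A minimal bash sketch of that logic, assuming GNU df output that does not wrap long device names; variable names mirror the trace, but this is an approximation, not the SPDK script itself:

    #!/usr/bin/env bash
    set -euo pipefail

    requested_size=${1:-2214592512}        # bytes; the trace asks for 2 GiB plus a 64 MiB margin
    testdir=${testdir:-$PWD}               # assumption: default to the current directory

    storage_fallback=$(mktemp -udt spdk.XXXXXX)   # -u: generate a name only, create nothing yet
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

    declare -A mounts fss sizes avails uses

    # df -T data rows: device, fstype, 1K-blocks, used, available, use%, mountpoint
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))   # convert 1K blocks to bytes
        avails["$mount"]=$((avail * 1024))
        uses["$mount"]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)

    for target_dir in "${storage_candidates[@]}"; do
        mkdir -p "$target_dir"
        # Resolve which mount point this candidate lives on.
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]}
        if ((target_space >= requested_size)); then
            export SPDK_TEST_STORAGE=$target_dir
            printf '* Found test storage at %s\n' "$target_dir"
            exit 0
        fi
    done

    echo 'no candidate with enough free space' >&2
    exit 1

Falling back to a mktemp -u path keeps the tests runnable even when the checked-out testdir sits on a small filesystem, which is why the trace probes three candidates in order.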
00:07:14.122 This may also happen if the target rejected all inputs we tried so far 00:07:14.122 [2024-07-12 13:33:02.492118] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:07:14.383 NEW_FUNC[1/658]: 0x4838a0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:07:14.383 NEW_FUNC[2/658]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:14.383 #14 NEW cov: 10960 ft: 10769 corp: 2/7b lim: 6 exec/s: 0 rss: 70Mb L: 6/6 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:14.654 #19 NEW cov: 10974 ft: 14142 corp: 3/13b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 5 ChangeByte-ShuffleBytes-InsertRepeatedBytes-ChangeByte-InsertByte- 00:07:14.654 #21 NEW cov: 10974 ft: 15980 corp: 4/19b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 2 EraseBytes-InsertByte- 00:07:14.915 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:14.915 #22 NEW cov: 10991 ft: 16271 corp: 5/25b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:15.174 #26 NEW cov: 10991 ft: 16756 corp: 6/31b lim: 6 exec/s: 26 rss: 73Mb L: 6/6 MS: 4 ChangeBit-InsertRepeatedBytes-ShuffleBytes-InsertByte- 00:07:15.174 #29 NEW cov: 10994 ft: 16830 corp: 7/37b lim: 6 exec/s: 29 rss: 73Mb L: 6/6 MS: 3 InsertByte-InsertByte-CopyPart- 00:07:15.435 #30 NEW cov: 10994 ft: 16987 corp: 8/43b lim: 6 exec/s: 30 rss: 73Mb L: 6/6 MS: 1 ChangeByte- 00:07:15.695 #31 NEW cov: 10994 ft: 17425 corp: 9/49b lim: 6 exec/s: 31 rss: 73Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:15.955 #32 NEW cov: 11001 ft: 17607 corp: 10/55b lim: 6 exec/s: 32 rss: 73Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:15.955 #33 NEW cov: 11001 ft: 17654 corp: 11/61b lim: 6 exec/s: 16 rss: 73Mb L: 6/6 MS: 1 CopyPart- 00:07:15.955 #33 DONE cov: 11001 ft: 17654 corp: 11/61b lim: 6 exec/s: 16 rss: 73Mb 00:07:15.955 Done 33 runs in 2 second(s) 00:07:15.955 [2024-07-12 13:33:04.491406] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:07:16.215 13:33:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:07:16.215 13:33:04 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:16.215 13:33:04 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:16.215 13:33:04 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:16.215 13:33:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:07:16.215 13:33:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:16.215 13:33:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:16.215 13:33:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:16.215 13:33:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:07:16.215 13:33:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:07:16.215 13:33:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:07:16.215 13:33:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:07:16.215 13:33:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:16.215 13:33:04 
llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:16.215 13:33:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:16.215 13:33:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:07:16.215 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:16.215 13:33:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:16.215 13:33:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:16.215 13:33:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:07:16.215 [2024-07-12 13:33:04.722567] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:07:16.215 [2024-07-12 13:33:04.722643] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2450821 ] 00:07:16.215 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.215 [2024-07-12 13:33:04.789221] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.475 [2024-07-12 13:33:04.856429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.475 INFO: Running with entropic power schedule (0xFF, 100). 00:07:16.475 INFO: Seed: 3530132279 00:07:16.475 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296d90c, 0x29c43f5), 00:07:16.475 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c43f8,0x2f2f288), 00:07:16.475 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:16.475 INFO: A corpus is not provided, starting from an empty corpus 00:07:16.475 #2 INITED exec/s: 0 rss: 65Mb 00:07:16.475 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:16.475 This may also happen if the target rejected all inputs we tried so far 00:07:16.734 [2024-07-12 13:33:05.076500] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:07:16.734 [2024-07-12 13:33:05.151015] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:16.734 [2024-07-12 13:33:05.151043] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:16.734 [2024-07-12 13:33:05.151168] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:16.993 NEW_FUNC[1/659]: 0x483e40 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:07:16.993 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:16.993 #42 NEW cov: 10952 ft: 10862 corp: 2/5b lim: 4 exec/s: 0 rss: 70Mb L: 4/4 MS: 5 CrossOver-ChangeBit-ChangeBinInt-InsertByte-InsertByte- 00:07:16.993 [2024-07-12 13:33:05.436309] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:16.993 [2024-07-12 13:33:05.436345] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:16.993 [2024-07-12 13:33:05.436482] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:16.993 NEW_FUNC[1/1]: 0x1404580 in q_addr /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:553 00:07:16.993 #43 NEW cov: 10973 ft: 13976 corp: 3/9b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 CopyPart- 00:07:17.252 [2024-07-12 13:33:05.625650] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:17.252 [2024-07-12 13:33:05.625672] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:17.252 [2024-07-12 13:33:05.625740] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:17.252 #49 NEW cov: 10973 ft: 15604 corp: 4/13b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:17.252 [2024-07-12 13:33:05.817619] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:17.252 [2024-07-12 13:33:05.817641] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:17.252 [2024-07-12 13:33:05.817708] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:17.512 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:17.512 #54 NEW cov: 10990 ft: 15899 corp: 5/17b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 5 EraseBytes-ChangeBit-InsertByte-ShuffleBytes-InsertByte- 00:07:17.512 [2024-07-12 13:33:06.010309] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:17.512 [2024-07-12 13:33:06.010332] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:17.512 [2024-07-12 13:33:06.010512] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:17.773 #57 NEW cov: 10990 ft: 16857 corp: 6/21b lim: 4 exec/s: 57 rss: 73Mb L: 4/4 MS: 3 EraseBytes-CrossOver-InsertByte- 00:07:17.773 [2024-07-12 13:33:06.206384] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:17.773 [2024-07-12 13:33:06.206406] vfio_user.c:3106:vfio_user_log: *ERROR*: 
/tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:17.773 [2024-07-12 13:33:06.206522] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:17.773 #58 NEW cov: 10990 ft: 16991 corp: 7/25b lim: 4 exec/s: 58 rss: 73Mb L: 4/4 MS: 1 ChangeByte- 00:07:18.032 [2024-07-12 13:33:06.388485] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:18.032 [2024-07-12 13:33:06.388506] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:18.032 [2024-07-12 13:33:06.388579] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:18.032 #59 NEW cov: 10990 ft: 17025 corp: 8/29b lim: 4 exec/s: 59 rss: 73Mb L: 4/4 MS: 1 ChangeByte- 00:07:18.032 [2024-07-12 13:33:06.571500] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:18.032 [2024-07-12 13:33:06.571521] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:18.032 [2024-07-12 13:33:06.571592] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:18.291 #60 NEW cov: 10990 ft: 17067 corp: 9/33b lim: 4 exec/s: 60 rss: 73Mb L: 4/4 MS: 1 CrossOver- 00:07:18.291 [2024-07-12 13:33:06.753323] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:18.291 [2024-07-12 13:33:06.753344] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:18.291 [2024-07-12 13:33:06.753492] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:18.291 #61 NEW cov: 10997 ft: 17461 corp: 10/37b lim: 4 exec/s: 61 rss: 73Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:18.549 [2024-07-12 13:33:06.948808] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:18.549 [2024-07-12 13:33:06.948828] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:18.549 [2024-07-12 13:33:06.948899] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:18.549 #67 NEW cov: 10997 ft: 17622 corp: 11/41b lim: 4 exec/s: 33 rss: 73Mb L: 4/4 MS: 1 CopyPart- 00:07:18.549 #67 DONE cov: 10997 ft: 17622 corp: 11/41b lim: 4 exec/s: 33 rss: 73Mb 00:07:18.550 Done 67 runs in 2 second(s) 00:07:18.550 [2024-07-12 13:33:07.084411] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:07:18.809 13:33:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:07:18.809 13:33:07 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:18.809 13:33:07 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:18.809 13:33:07 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:18.809 13:33:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:07:18.809 13:33:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:18.809 13:33:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:18.809 13:33:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:18.809 13:33:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:07:18.809 13:33:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:07:18.809 13:33:07 
llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:07:18.809 13:33:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:07:18.809 13:33:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:18.809 13:33:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:18.809 13:33:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:18.809 13:33:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:07:18.809 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:18.809 13:33:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:18.809 13:33:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:18.809 13:33:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:07:18.809 [2024-07-12 13:33:07.313870] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:07:18.809 [2024-07-12 13:33:07.313949] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2451248 ] 00:07:18.809 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.809 [2024-07-12 13:33:07.381009] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.068 [2024-07-12 13:33:07.448012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.068 INFO: Running with entropic power schedule (0xFF, 100). 00:07:19.068 INFO: Seed: 1828162195 00:07:19.068 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296d90c, 0x29c43f5), 00:07:19.068 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c43f8,0x2f2f288), 00:07:19.068 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:19.068 INFO: A corpus is not provided, starting from an empty corpus 00:07:19.068 #2 INITED exec/s: 0 rss: 65Mb 00:07:19.068 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
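Note on the per-target recipe that every run-N block repeats (run.sh @22 through @47, then @58): make a private /tmp/vfio-user-N tree, rewrite the template JSON config to point at it, extend the LSAN suppression file, launch the harness, and tear the tree down. An approximate reconstruction from the trace; the redirection of the sed output into the per-run config is assumed, since the log elides it:

    start_llvm_fuzz() {
        local fuzzer_type=$1 timen=$2 core=$3
        local rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
        local corpus_dir=$rootdir/../corpus/llvm_vfio_$fuzzer_type
        local fuzzer_dir=/tmp/vfio-user-$fuzzer_type
        local vfiouser_dir=$fuzzer_dir/domain/1
        local vfiouser_io_dir=$fuzzer_dir/domain/2
        local vfiouser_cfg=$fuzzer_dir/fuzz_vfio_json.conf
        local suppress_file=/var/tmp/suppress_vfio_fuzz

        mkdir -p "$fuzzer_dir" "$vfiouser_dir" "$vfiouser_io_dir" "$corpus_dir"

        # Point the shared template at this run's private vfio-user sockets.
        sed -e "s%/tmp/vfio-user/domain/1%$vfiouser_dir%;
                s%/tmp/vfio-user/domain/2%$vfiouser_io_dir%" \
            "$rootdir/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" > "$vfiouser_cfg"

        # Allocations that intentionally outlive the fuzz loop.
        echo leak:spdk_nvmf_qpair_disconnect >> "$suppress_file"
        echo leak:nvmf_ctrlr_create >> "$suppress_file"

        LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0" \
            "$rootdir/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" \
            -m "$core" -s 0 -t "$timen" -Z "$fuzzer_type" \
            -D "$corpus_dir" -F "$vfiouser_dir" -Y "$vfiouser_io_dir" \
            -c "$vfiouser_cfg" -r "$fuzzer_dir/spdk$fuzzer_type.sock" \
            -P "$rootdir/../output/llvm/"

        rm -rf "$fuzzer_dir" "$suppress_file"   # per-run cleanup, as at run.sh@58
    }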
00:07:19.068 This may also happen if the target rejected all inputs we tried so far 00:07:19.328 [2024-07-12 13:33:07.668560] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:07:19.328 [2024-07-12 13:33:07.744888] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:19.589 NEW_FUNC[1/659]: 0x484820 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:07:19.589 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:19.589 #31 NEW cov: 10939 ft: 10856 corp: 2/9b lim: 8 exec/s: 0 rss: 70Mb L: 8/8 MS: 4 ChangeBinInt-ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:07:19.589 [2024-07-12 13:33:08.076322] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:19.848 #32 NEW cov: 10953 ft: 13793 corp: 3/17b lim: 8 exec/s: 0 rss: 72Mb L: 8/8 MS: 1 ChangeByte- 00:07:19.848 [2024-07-12 13:33:08.244200] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:19.848 #38 NEW cov: 10956 ft: 15450 corp: 4/25b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:19.848 [2024-07-12 13:33:08.415622] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:20.109 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:20.109 #39 NEW cov: 10973 ft: 16260 corp: 5/33b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:20.109 [2024-07-12 13:33:08.586093] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:20.109 #50 NEW cov: 10973 ft: 16372 corp: 6/41b lim: 8 exec/s: 50 rss: 73Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:20.368 [2024-07-12 13:33:08.759745] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:20.368 #56 NEW cov: 10973 ft: 16573 corp: 7/49b lim: 8 exec/s: 56 rss: 73Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:20.369 [2024-07-12 13:33:08.934362] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:20.629 #57 NEW cov: 10973 ft: 16615 corp: 8/57b lim: 8 exec/s: 57 rss: 73Mb L: 8/8 MS: 1 CrossOver- 00:07:20.629 [2024-07-12 13:33:09.091888] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:20.629 #58 NEW cov: 10973 ft: 16999 corp: 9/65b lim: 8 exec/s: 58 rss: 74Mb L: 8/8 MS: 1 CopyPart- 00:07:20.889 [2024-07-12 13:33:09.264404] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:20.889 #59 NEW cov: 10973 ft: 17051 corp: 10/73b lim: 8 exec/s: 59 rss: 74Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:20.889 [2024-07-12 13:33:09.437419] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:21.149 #60 NEW cov: 10980 ft: 17080 corp: 11/81b lim: 8 exec/s: 60 rss: 74Mb L: 8/8 MS: 1 ChangeByte- 00:07:21.149 [2024-07-12 13:33:09.605695] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:21.149 #61 NEW cov: 10980 ft: 17106 corp: 12/89b lim: 8 exec/s: 30 rss: 74Mb L: 8/8 MS: 1 ChangeByte- 00:07:21.149 #61 DONE cov: 10980 ft: 17106 corp: 12/89b lim: 8 exec/s: 30 rss: 74Mb 00:07:21.149 Done 61 runs in 2 second(s) 00:07:21.149 [2024-07-12 13:33:09.721409] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:07:21.410 13:33:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:07:21.410 13:33:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:21.410 13:33:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:21.410 13:33:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:21.410 13:33:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:07:21.410 13:33:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:21.410 13:33:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:21.410 13:33:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:21.410 13:33:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:07:21.410 13:33:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:07:21.410 13:33:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:07:21.410 13:33:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:07:21.410 13:33:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:21.410 13:33:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:21.410 13:33:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:21.410 13:33:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:07:21.410 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:21.410 13:33:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:21.410 13:33:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:21.410 13:33:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:07:21.410 [2024-07-12 13:33:09.955119] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 
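Note on the pair of `echo leak:...` lines in every setup block: they build a LeakSanitizer suppression file, one leak:<symbol substring> entry per line, consumed through the suppressions= key of the LSAN_OPTIONS string set at run.sh@34. The same mechanism in isolation (./a.out is a placeholder for any ASan/LSan-instrumented binary):

    cat > /var/tmp/suppress_vfio_fuzz <<'EOF'
    leak:spdk_nvmf_qpair_disconnect
    leak:nvmf_ctrlr_create
    EOF

    LSAN_OPTIONS='report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0' \
        ./a.out

Here print_suppressions=0 keeps the end-of-run report quiet about matched entries, and report_objects=1 lists the addresses of anything that still leaks.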
00:07:21.410 [2024-07-12 13:33:09.955212] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2452274 ] 00:07:21.410 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.671 [2024-07-12 13:33:10.024449] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.671 [2024-07-12 13:33:10.095856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.932 INFO: Running with entropic power schedule (0xFF, 100). 00:07:21.932 INFO: Seed: 177202920 00:07:21.932 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296d90c, 0x29c43f5), 00:07:21.932 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c43f8,0x2f2f288), 00:07:21.932 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:21.932 INFO: A corpus is not provided, starting from an empty corpus 00:07:21.932 #2 INITED exec/s: 0 rss: 65Mb 00:07:21.932 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:21.932 This may also happen if the target rejected all inputs we tried so far 00:07:21.932 [2024-07-12 13:33:10.313778] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:07:22.193 NEW_FUNC[1/659]: 0x484f00 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:07:22.193 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:22.193 #112 NEW cov: 10941 ft: 10801 corp: 2/33b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 5 CrossOver-InsertRepeatedBytes-ChangeBit-ChangeBinInt-InsertRepeatedBytes- 00:07:22.454 #113 NEW cov: 10964 ft: 13809 corp: 3/65b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 ChangeByte- 00:07:22.454 #114 NEW cov: 10964 ft: 14877 corp: 4/97b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeBit- 00:07:22.714 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:22.714 #125 NEW cov: 10981 ft: 15892 corp: 5/129b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 CrossOver- 00:07:22.974 #126 NEW cov: 10981 ft: 15966 corp: 6/161b lim: 32 exec/s: 126 rss: 73Mb L: 32/32 MS: 1 CopyPart- 00:07:22.974 #128 NEW cov: 10981 ft: 16437 corp: 7/193b lim: 32 exec/s: 128 rss: 73Mb L: 32/32 MS: 2 EraseBytes-CrossOver- 00:07:23.235 #130 NEW cov: 10981 ft: 16454 corp: 8/225b lim: 32 exec/s: 130 rss: 73Mb L: 32/32 MS: 2 EraseBytes-InsertByte- 00:07:23.541 #131 NEW cov: 10981 ft: 16476 corp: 9/257b lim: 32 exec/s: 131 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:23.541 #135 NEW cov: 10981 ft: 16497 corp: 10/289b lim: 32 exec/s: 135 rss: 73Mb L: 32/32 MS: 4 EraseBytes-ChangeBit-ChangeByte-CopyPart- 00:07:23.837 #139 NEW cov: 10988 ft: 16911 corp: 11/321b lim: 32 exec/s: 139 rss: 73Mb L: 32/32 MS: 4 EraseBytes-ChangeBinInt-ChangeBit-InsertRepeatedBytes- 00:07:24.104 #145 NEW cov: 10988 ft: 17304 corp: 12/353b lim: 32 exec/s: 72 rss: 73Mb L: 32/32 MS: 1 CrossOver- 00:07:24.104 #145 DONE cov: 10988 ft: 17304 corp: 12/353b lim: 32 exec/s: 72 rss: 73Mb 00:07:24.104 Done 145 runs in 2 second(s) 00:07:24.104 [2024-07-12 13:33:12.448406] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:07:24.104 13:33:12 llvm_fuzz.vfio_llvm_fuzz 
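Note on the recurring "EAL: No free 2048 kB hugepages reported on node 1" line: it is informational for these runs, which launch with -s 0 and the EAL arguments "-m 0 --huge-unlink", and all of them complete. When that notice does matter, the host's actual hugepage inventory can be checked with generic Linux diagnostics (not part of these scripts):

    grep -i huge /proc/meminfo     # totals and free counts per page size
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages   # per NUMA node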
-- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:07:24.104 13:33:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:24.104 13:33:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:24.104 13:33:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:24.104 13:33:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:07:24.104 13:33:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:24.104 13:33:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:24.104 13:33:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:24.104 13:33:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:07:24.104 13:33:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:07:24.104 13:33:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:07:24.104 13:33:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:07:24.104 13:33:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:24.104 13:33:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:24.104 13:33:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:24.104 13:33:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:07:24.104 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:24.104 13:33:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:24.104 13:33:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:24.104 13:33:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:07:24.104 [2024-07-12 13:33:12.679342] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:07:24.104 [2024-07-12 13:33:12.679439] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2452754 ] 00:07:24.366 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.366 [2024-07-12 13:33:12.747065] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.366 [2024-07-12 13:33:12.814505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.625 INFO: Running with entropic power schedule (0xFF, 100). 
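Note on how the loop is sized (run.sh @67 through @69): every fuzz target registers a .fn callback in llvm_vfio_fuzz.c, so counting ".fn =" initializers in the source yields the target count, 7 here. A sketch of that guard, with $rootdir assumed to be the spdk checkout:

    fuzzfile=$rootdir/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c
    fuzz_num=$(grep -c '\.fn =' "$fuzzfile")
    (( fuzz_num != 0 ))   # fails the script (under set -e) if no targets were found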
00:07:24.625 INFO: Seed: 2897196007 00:07:24.625 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296d90c, 0x29c43f5), 00:07:24.625 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c43f8,0x2f2f288), 00:07:24.625 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:24.625 INFO: A corpus is not provided, starting from an empty corpus 00:07:24.625 #2 INITED exec/s: 0 rss: 65Mb 00:07:24.625 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:24.625 This may also happen if the target rejected all inputs we tried so far 00:07:24.625 [2024-07-12 13:33:13.037057] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:07:24.885 NEW_FUNC[1/658]: 0x485780 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:07:24.885 NEW_FUNC[2/658]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:24.885 #44 NEW cov: 10946 ft: 10913 corp: 2/33b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:25.146 NEW_FUNC[1/1]: 0x17d1c90 in nvme_qpair_is_admin_queue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1157 00:07:25.146 #60 NEW cov: 10963 ft: 13981 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeBit- 00:07:25.146 #61 NEW cov: 10966 ft: 14895 corp: 4/97b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:25.406 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:25.406 #62 NEW cov: 10983 ft: 15725 corp: 5/129b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:25.666 #63 NEW cov: 10983 ft: 15835 corp: 6/161b lim: 32 exec/s: 63 rss: 73Mb L: 32/32 MS: 1 ChangeBit- 00:07:25.666 #64 NEW cov: 10983 ft: 16269 corp: 7/193b lim: 32 exec/s: 64 rss: 73Mb L: 32/32 MS: 1 ChangeByte- 00:07:25.927 #65 NEW cov: 10983 ft: 16730 corp: 8/225b lim: 32 exec/s: 65 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:07:26.188 #66 NEW cov: 10983 ft: 17057 corp: 9/257b lim: 32 exec/s: 66 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:07:26.449 #67 NEW cov: 10983 ft: 17139 corp: 10/289b lim: 32 exec/s: 67 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:07:26.449 #68 NEW cov: 10990 ft: 17439 corp: 11/321b lim: 32 exec/s: 68 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:26.709 #69 NEW cov: 10990 ft: 17591 corp: 12/353b lim: 32 exec/s: 34 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:07:26.709 #69 DONE cov: 10990 ft: 17591 corp: 12/353b lim: 32 exec/s: 34 rss: 74Mb 00:07:26.709 Done 69 runs in 2 second(s) 00:07:26.709 [2024-07-12 13:33:15.165398] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:07:26.970 13:33:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:07:26.970 13:33:15 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:26.970 13:33:15 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:26.970 13:33:15 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:26.970 13:33:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:07:26.970 13:33:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:26.970 13:33:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:26.970 13:33:15 llvm_fuzz.vfio_llvm_fuzz 
-- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:26.970 13:33:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:07:26.970 13:33:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:07:26.970 13:33:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:07:26.970 13:33:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:07:26.970 13:33:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:26.970 13:33:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:26.970 13:33:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:26.970 13:33:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:07:26.970 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:26.970 13:33:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:26.970 13:33:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:26.970 13:33:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:07:26.970 [2024-07-12 13:33:15.396546] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization... 00:07:26.970 [2024-07-12 13:33:15.396654] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2453396 ] 00:07:26.970 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.970 [2024-07-12 13:33:15.463804] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.970 [2024-07-12 13:33:15.531294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.231 INFO: Running with entropic power schedule (0xFF, 100). 00:07:27.231 INFO: Seed: 1325232520 00:07:27.231 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296d90c, 0x29c43f5), 00:07:27.231 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c43f8,0x2f2f288), 00:07:27.231 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:27.231 INFO: A corpus is not provided, starting from an empty corpus 00:07:27.231 #2 INITED exec/s: 0 rss: 66Mb 00:07:27.231 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
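Note on the libFuzzer bookkeeping each run emits: "INFO: Seed", then "#N NEW cov: ... ft: ..." as coverage grows, then a "#N DONE" summary and "Done N runs in M second(s)". A hypothetical helper, not in the SPDK tree, that condenses a saved copy of this output (fuzz_run.log is an assumed capture) to one line per run:

    awk '
        /INFO: Seed:/ { seed = $NF }
        / DONE /      { for (i = 1; i <= NF; i++)         # field-scan so log prefixes do not matter
                            if ($i == "cov:") cov = $(i + 1)
                            else if ($i == "ft:") ft = $(i + 1) }
        /Done [0-9]+ runs/ { printf "seed=%s cov=%s ft=%s\n", seed, cov, ft }
    ' fuzz_run.log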
00:07:27.231 This may also happen if the target rejected all inputs we tried so far
00:07:27.231 [2024-07-12 13:33:15.756467] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller
00:07:27.491 [2024-07-12 13:33:15.830414] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:27.491 [2024-07-12 13:33:15.830508] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:27.491 NEW_FUNC[1/659]: 0x486180 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171
00:07:27.491 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:27.491 #26 NEW cov: 10956 ft: 10888 corp: 2/14b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 4 ChangeBit-InsertRepeatedBytes-ChangeBinInt-InsertRepeatedBytes-
00:07:27.751 [2024-07-12 13:33:16.157380] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:27.751 [2024-07-12 13:33:16.157475] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:27.751 NEW_FUNC[1/1]: 0x1726410 in nvme_pcie_qpair_process_completions /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_pcie_common.c:829
00:07:27.751 #32 NEW cov: 10975 ft: 14080 corp: 3/27b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ChangeBinInt-
00:07:28.011 [2024-07-12 13:33:16.336993] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:28.011 [2024-07-12 13:33:16.337081] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:28.011 #38 NEW cov: 10975 ft: 15198 corp: 4/40b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 CrossOver-
00:07:28.011 [2024-07-12 13:33:16.514472] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:28.011 [2024-07-12 13:33:16.514562] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:28.271 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613
00:07:28.271 #42 NEW cov: 10992 ft: 15452 corp: 5/53b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 4 CrossOver-EraseBytes-InsertByte-CopyPart-
00:07:28.271 [2024-07-12 13:33:16.698723] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:28.271 [2024-07-12 13:33:16.698813] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:28.271 #43 NEW cov: 10992 ft: 15563 corp: 6/66b lim: 13 exec/s: 43 rss: 75Mb L: 13/13 MS: 1 CopyPart-
00:07:28.531 [2024-07-12 13:33:16.867328] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:28.531 [2024-07-12 13:33:16.867457] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:28.531 #44 NEW cov: 10992 ft: 15737 corp: 7/79b lim: 13 exec/s: 44 rss: 75Mb L: 13/13 MS: 1 ShuffleBytes-
00:07:28.531 [2024-07-12 13:33:17.047447] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:28.531 [2024-07-12 13:33:17.047536] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:28.790 #45 NEW cov: 10992 ft: 15745 corp: 8/92b lim: 13 exec/s: 45 rss: 75Mb L: 13/13 MS: 1 CrossOver-
00:07:28.790 [2024-07-12 13:33:17.230022] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:28.790 [2024-07-12 13:33:17.230110] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:28.790 #46 NEW cov: 10992 ft: 15753 corp: 9/105b lim: 13 exec/s: 46 rss: 75Mb L: 13/13 MS: 1 CopyPart-
00:07:29.050 [2024-07-12 13:33:17.397746] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:29.050 [2024-07-12 13:33:17.397835] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:29.050 #52 NEW cov: 10992 ft: 15827 corp: 10/118b lim: 13 exec/s: 52 rss: 75Mb L: 13/13 MS: 1 ChangeBinInt-
00:07:29.050 [2024-07-12 13:33:17.580269] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:29.050 [2024-07-12 13:33:17.580310] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:29.310 #58 NEW cov: 10999 ft: 16594 corp: 11/131b lim: 13 exec/s: 58 rss: 75Mb L: 13/13 MS: 1 CopyPart-
00:07:29.310 [2024-07-12 13:33:17.764667] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:29.310 [2024-07-12 13:33:17.764756] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:29.310 #59 NEW cov: 10999 ft: 17189 corp: 12/144b lim: 13 exec/s: 29 rss: 75Mb L: 13/13 MS: 1 ChangeByte-
00:07:29.310 #59 DONE cov: 10999 ft: 17189 corp: 12/144b lim: 13 exec/s: 29 rss: 75Mb
00:07:29.310 Done 59 runs in 2 second(s)
00:07:29.310 [2024-07-12 13:33:17.889413] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller
00:07:29.571 13:33:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz
00:07:29.571 13:33:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:29.571 13:33:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:29.571 13:33:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1
00:07:29.571 13:33:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6
00:07:29.571 13:33:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:07:29.571 13:33:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:07:29.571 13:33:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:07:29.571 13:33:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6
00:07:29.571 13:33:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1
00:07:29.571 13:33:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2
00:07:29.571 13:33:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf
00:07:29.571 13:33:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:07:29.571 13:33:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:07:29.571 13:33:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:07:29.571 13:33:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%;
00:07:29.571 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:07:29.571 13:33:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:29.571 13:33:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:07:29.571 13:33:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6
00:07:29.571 [2024-07-12 13:33:18.118343] Starting SPDK v24.09-pre git sha1 a49cd26ae / DPDK 24.03.0 initialization...
00:07:29.571 [2024-07-12 13:33:18.118422] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2453970 ]
00:07:29.832 EAL: No free 2048 kB hugepages reported on node 1
00:07:29.832 [2024-07-12 13:33:18.185960] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:29.832 [2024-07-12 13:33:18.253496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:29.832 INFO: Running with entropic power schedule (0xFF, 100).
00:07:29.832 INFO: Seed: 4038232832
00:07:30.092 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296d90c, 0x29c43f5),
00:07:30.092 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c43f8,0x2f2f288),
00:07:30.092 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:07:30.092 INFO: A corpus is not provided, starting from an empty corpus
00:07:30.092 #2 INITED exec/s: 0 rss: 65Mb
00:07:30.092 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:30.092 This may also happen if the target rejected all inputs we tried so far
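For readers reconstructing the run from this trace: the vfio/run.sh xtrace above amounts to the per-instance launch sequence sketched below. This is a condensed sketch, not run.sh itself; SPDK_ROOT is a shorthand variable introduced here, and the output redirections (writing the sed result into the instance config and the echo lines into the suppression file, plus exporting LSAN_OPTIONS) are assumed, since bash xtrace does not print redirections.

    # Sketch of one fuzzer-instance launch (instance 6), reconstructed from the
    # xtrace above. SPDK_ROOT is hypothetical shorthand; all flags are as logged.
    SPDK_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    i=6
    fuzzer_dir=/tmp/vfio-user-$i
    corpus_dir=$SPDK_ROOT/../corpus/llvm_vfio_$i
    mkdir -p "$fuzzer_dir" "$fuzzer_dir/domain/1" "$fuzzer_dir/domain/2" "$corpus_dir"
    # Point the template config at this instance's vfio-user socket directories
    # (the redirection into the per-instance config is assumed, not shown above).
    sed -e "s%/tmp/vfio-user/domain/1%$fuzzer_dir/domain/1%; s%/tmp/vfio-user/domain/2%$fuzzer_dir/domain/2%" \
        "$SPDK_ROOT/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" > "$fuzzer_dir/fuzz_vfio_json.conf"
    # LeakSanitizer suppressions for known shutdown-path leaks (redirections assumed).
    echo leak:spdk_nvmf_qpair_disconnect  > /var/tmp/suppress_vfio_fuzz
    echo leak:nvmf_ctrlr_create          >> /var/tmp/suppress_vfio_fuzz
    export LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
    # Fuzzer type 6 (-Z), 1 s budget (-t), pinned to core 0 (-m 0x1), as logged.
    "$SPDK_ROOT/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" -m 0x1 -s 0 \
        -P "$SPDK_ROOT/../output/llvm/" -F "$fuzzer_dir/domain/1" \
        -c "$fuzzer_dir/fuzz_vfio_json.conf" -t 1 -D "$corpus_dir" \
        -Y "$fuzzer_dir/domain/2" -r "$fuzzer_dir/spdk$i.sock" -Z "$i"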
00:07:30.092 [2024-07-12 13:33:18.470633] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller
00:07:30.092 [2024-07-12 13:33:18.547073] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:30.092 [2024-07-12 13:33:18.547258] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:30.352 NEW_FUNC[1/660]: 0x486e70 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190
00:07:30.352 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:30.352 #13 NEW cov: 10944 ft: 10919 corp: 2/10b lim: 9 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 InsertRepeatedBytes-
00:07:30.352 [2024-07-12 13:33:18.821548] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:30.352 [2024-07-12 13:33:18.821636] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:30.352 #39 NEW cov: 10964 ft: 14456 corp: 3/19b lim: 9 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeByte-
00:07:30.611 [2024-07-12 13:33:18.999059] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:30.611 [2024-07-12 13:33:18.999148] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:30.611 #44 NEW cov: 10967 ft: 14871 corp: 4/28b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 5 ChangeByte-CrossOver-InsertRepeatedBytes-CrossOver-InsertByte-
00:07:30.611 [2024-07-12 13:33:19.183435] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:30.611 [2024-07-12 13:33:19.183617] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:30.870 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613
00:07:30.870 #50 NEW cov: 10984 ft: 15367 corp: 5/37b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 ShuffleBytes-
00:07:30.870 [2024-07-12 13:33:19.360947] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:30.870 [2024-07-12 13:33:19.361036] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:31.129 #56 NEW cov: 10984 ft: 15444 corp: 6/46b lim: 9 exec/s: 56 rss: 73Mb L: 9/9 MS: 1 ChangeBit-
00:07:31.129 [2024-07-12 13:33:19.523274] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:31.129 [2024-07-12 13:33:19.523338] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:31.129 #68 NEW cov: 10984 ft: 15933 corp: 7/55b lim: 9 exec/s: 68 rss: 73Mb L: 9/9 MS: 2 ChangeByte-CrossOver-
00:07:31.129 [2024-07-12 13:33:19.682905] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:31.129 [2024-07-12 13:33:19.682993] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:31.388 #69 NEW cov: 10984 ft: 16228 corp: 8/64b lim: 9 exec/s: 69 rss: 73Mb L: 9/9 MS: 1 ChangeByte-
00:07:31.388 [2024-07-12 13:33:19.814938] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:31.388 [2024-07-12 13:33:19.815030] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:31.388 #75 NEW cov: 10984 ft: 16611 corp: 9/73b lim: 9 exec/s: 75 rss: 73Mb L: 9/9 MS: 1 ChangeBinInt-
00:07:31.388 [2024-07-12 13:33:19.948056] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:31.388 [2024-07-12 13:33:19.948150] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:31.647 #76 NEW cov: 10984 ft: 16914 corp: 10/82b lim: 9 exec/s: 76 rss: 73Mb L: 9/9 MS: 1 ChangeByte-
00:07:31.647 [2024-07-12 13:33:20.080581] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:31.647 [2024-07-12 13:33:20.080715] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:31.647 #77 NEW cov: 10984 ft: 17053 corp: 11/91b lim: 9 exec/s: 77 rss: 73Mb L: 9/9 MS: 1 CopyPart-
00:07:31.647 [2024-07-12 13:33:20.212634] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:31.647 [2024-07-12 13:33:20.212722] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:31.906 #78 NEW cov: 10991 ft: 17163 corp: 12/100b lim: 9 exec/s: 78 rss: 73Mb L: 9/9 MS: 1 ChangeBit-
00:07:31.906 [2024-07-12 13:33:20.345799] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:31.906 [2024-07-12 13:33:20.345886] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:31.906 #79 NEW cov: 10991 ft: 17189 corp: 13/109b lim: 9 exec/s: 79 rss: 73Mb L: 9/9 MS: 1 CrossOver-
00:07:31.906 [2024-07-12 13:33:20.477759] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:31.906 [2024-07-12 13:33:20.477848] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:32.165 #85 NEW cov: 10991 ft: 17229 corp: 14/118b lim: 9 exec/s: 42 rss: 73Mb L: 9/9 MS: 1 ChangeBinInt-
00:07:32.165 #85 DONE cov: 10991 ft: 17229 corp: 14/118b lim: 9 exec/s: 42 rss: 73Mb
00:07:32.165 Done 85 runs in 2 second(s)
00:07:32.165 [2024-07-12 13:33:20.571405] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller
00:07:32.424 13:33:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz
00:07:32.424 13:33:20 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:32.424 13:33:20 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:32.424 13:33:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:07:32.424
00:07:32.424 real 0m18.946s
00:07:32.424 user 0m28.099s
00:07:32.424 sys 0m1.565s
00:07:32.424 13:33:20 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:32.424 13:33:20 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:07:32.424 ************************************
00:07:32.424 END TEST vfio_llvm_fuzz
00:07:32.424 ************************************
00:07:32.424 13:33:20 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0
00:07:32.424 13:33:20 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]]
00:07:32.424
00:07:32.424 real 1m21.839s
00:07:32.424 user 2m12.128s
00:07:32.424 sys 0m7.805s
00:07:32.424 13:33:20 llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:32.424 13:33:20 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:07:32.424 ************************************
00:07:32.424 END TEST llvm_fuzz
00:07:32.424 ************************************
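The (( i++ )) and (( i < fuzz_num )) lines above come from the loop in the harness's common.sh that steps through fuzzer types; only the call shape start_llvm_fuzz <type> <time> <core-mask> is visible in this log. A minimal sketch of that loop, assuming fuzz_num=7 for this run (instances 0 through 6, consistent with the trace ending after instance 6) and the 1-second, core-0x1 arguments seen above:

    # Reconstruction of the driver loop; fuzz_num and the loop form are inferred
    # from the xtrace, not copied from common.sh.
    fuzz_num=7
    for (( i = 0; i < fuzz_num; i++ )); do
        # Each iteration provisions /tmp/vfio-user-$i and runs one fuzzer
        # instance for 1 s on core 0 (see the per-instance sketch earlier).
        start_llvm_fuzz "$i" 1 0x1
    done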
00:07:32.424 13:33:20 -- common/autotest_common.sh@1142 -- # return 0
00:07:32.424 13:33:20 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:07:32.424 13:33:20 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT
00:07:32.424 13:33:20 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup
00:07:32.424 13:33:20 -- common/autotest_common.sh@722 -- # xtrace_disable
00:07:32.424 13:33:20 -- common/autotest_common.sh@10 -- # set +x
00:07:32.424 13:33:20 -- spdk/autotest.sh@383 -- # autotest_cleanup
00:07:32.424 13:33:20 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:07:32.424 13:33:20 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:07:32.424 13:33:20 -- common/autotest_common.sh@10 -- # set +x
00:07:40.555 INFO: APP EXITING
00:07:40.555 INFO: killing all VMs
00:07:40.555 INFO: killing vhost app
00:07:40.555 INFO: EXIT DONE
00:07:43.094 Waiting for block devices as requested
00:07:43.094 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:07:43.094 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:07:43.354 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:07:43.354 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:07:43.354 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:07:43.354 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:07:43.613 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:07:43.613 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:07:43.613 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:07:43.873 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:07:43.873 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:07:43.873 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:07:44.132 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:07:44.133 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:07:44.133 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:07:44.133 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:07:44.393 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:07:48.595 Cleaning
00:07:48.595 Removing: /dev/shm/spdk_tgt_trace.pid2415157
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2414650
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2415157
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2415786
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2416820
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2417160
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2418231
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2418510
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2418720
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2419100
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2419489
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2419895
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2420282
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2420470
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2420679
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2421056
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2422208
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2425691
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2425908
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2426250
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2426439
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2426817
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2427146
00:07:48.595 Removing: /var/run/dpdk/spdk_pid2427524
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2427564
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2427902
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2428185
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2428278
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2428604
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2429040
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2429363
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2429494
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2429822
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2430173
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2430210
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2430279
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2430627
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2430977
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2431263
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2431447
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2431719
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2432072
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2432421
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2432731
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2432908
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2433160
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2433509
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2433864
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2434190
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2434366
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2434605
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2434954
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2435312
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2435665
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2435851
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2436070
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2436372
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2436674
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2437386
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2437750
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2438405
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2438764
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2439433
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2439788
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2440461
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2440816
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2441480
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2441839
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2442518
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2442883
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2443533
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2443907
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2444547
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2444937
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2445510
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2445951
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2446469
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2446976
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2447465
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2448000
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2448444
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2449022
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2449422
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2450220
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2450821
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2451248
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2452274
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2452754
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2453396
00:07:48.596 Removing: /var/run/dpdk/spdk_pid2453970
00:07:48.596 Clean
00:07:48.596 13:33:36 -- common/autotest_common.sh@1451 -- # return 0
00:07:48.596 13:33:36 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:07:48.596 13:33:36 -- common/autotest_common.sh@728 -- # xtrace_disable
00:07:48.596 13:33:36 -- common/autotest_common.sh@10 -- # set +x
00:07:48.596 13:33:36 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:07:48.596 13:33:36 -- common/autotest_common.sh@728 -- # xtrace_disable
00:07:48.596 13:33:36 -- common/autotest_common.sh@10 -- # set +x
00:07:48.596 13:33:36 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:07:48.596 13:33:36 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:07:48.596 13:33:36 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
00:07:48.596 13:33:36 -- spdk/autotest.sh@391 -- # hash lcov
00:07:48.596 13:33:36 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]]
00:07:48.596 13:33:37 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
13:33:37 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
13:33:37 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
13:33:37 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
13:33:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:33:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:33:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:33:37 -- paths/export.sh@5 -- $ export PATH
13:33:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:33:37 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
13:33:37 -- common/autobuild_common.sh@444 -- $ date +%s
13:33:37 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720784017.XXXXXX
13:33:37 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720784017.snlLvm
13:33:37 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
13:33:37 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
13:33:37 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
13:33:37 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
13:33:37 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
13:33:37 -- common/autobuild_common.sh@460 -- $ get_config_params
13:33:37 -- common/autotest_common.sh@396 -- $ xtrace_disable
13:33:37 -- common/autotest_common.sh@10 -- $ set +x
00:07:48.596 13:33:37 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
13:33:37 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
13:33:37 -- pm/common@17 -- $ local monitor
13:33:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:33:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:33:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:33:37 -- pm/common@21 -- $ date +%s
13:33:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:33:37 -- pm/common@25 -- $ sleep 1
13:33:37 -- pm/common@21 -- $ date +%s
13:33:37 -- pm/common@21 -- $ date +%s
13:33:37 -- pm/common@21 -- $ date +%s
13:33:37 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720784017
13:33:37 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720784017
13:33:37 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720784017
13:33:37 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720784017
00:07:48.596 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720784017_collect-vmstat.pm.log
00:07:48.596 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720784017_collect-cpu-temp.pm.log
00:07:48.596 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720784017_collect-cpu-load.pm.log
00:07:48.857 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720784017_collect-bmc-pm.bmc.pm.log
00:07:49.798 13:33:38 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:07:49.798 13:33:38 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144
00:07:49.798 13:33:38 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:49.798 13:33:38 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:07:49.798 13:33:38 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:07:49.798 13:33:38 -- spdk/autopackage.sh@19 -- $ timing_finish
00:07:49.798 13:33:38 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:07:49.798 13:33:38 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:07:49.798 13:33:38 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:07:49.798 13:33:38 -- spdk/autopackage.sh@20 -- $ exit 0
00:07:49.798 13:33:38 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:07:49.798 13:33:38 -- pm/common@29 -- $ signal_monitor_resources TERM
00:07:49.798 13:33:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:07:49.798 13:33:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:49.798 13:33:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:07:49.798 13:33:38 -- pm/common@44 -- $ pid=2462687
00:07:49.798 13:33:38 -- pm/common@50 -- $ kill -TERM 2462687
00:07:49.798 13:33:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:49.798 13:33:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:07:49.798 13:33:38 -- pm/common@44 -- $ pid=2462688
00:07:49.798 13:33:38 -- pm/common@50 -- $ kill -TERM 2462688
00:07:49.798 13:33:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:49.798 13:33:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:07:49.798 13:33:38 -- pm/common@44 -- $ pid=2462691
00:07:49.798 13:33:38 -- pm/common@50 -- $ kill -TERM 2462691
00:07:49.798 13:33:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:49.798 13:33:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:07:49.798 13:33:38 -- pm/common@44 -- $ pid=2462745
00:07:49.798 13:33:38 -- pm/common@50 -- $ sudo -E kill -TERM 2462745
00:07:49.798 + [[ -n 2294399 ]]
00:07:49.798 + sudo kill 2294399
00:07:49.807 [Pipeline] }
00:07:49.822 [Pipeline] // stage
00:07:49.828 [Pipeline] }
00:07:49.845 [Pipeline] // timeout
00:07:49.850 [Pipeline] }
00:07:49.867 [Pipeline] // catchError
00:07:49.873 [Pipeline] }
00:07:49.891 [Pipeline] // wrap
00:07:49.897 [Pipeline] }
00:07:49.913 [Pipeline] // catchError
00:07:49.922 [Pipeline] stage
00:07:49.924 [Pipeline] { (Epilogue)
00:07:49.941 [Pipeline] catchError
00:07:49.943 [Pipeline] {
00:07:49.958 [Pipeline] echo
00:07:49.959 Cleanup processes
00:07:49.966 [Pipeline] sh
00:07:50.253 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:50.253 2463002 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache
00:07:50.253 2463866 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:50.269 [Pipeline] sh
00:07:50.555 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:50.555 ++ grep -v 'sudo pgrep'
00:07:50.555 ++ awk '{print $1}'
00:07:50.555 + sudo kill -9 2463002
00:07:50.568 [Pipeline] sh
00:07:50.854 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:07:53.413 [Pipeline] sh
00:07:53.723 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:07:53.723 Artifacts sizes are good
00:07:53.768 [Pipeline] archiveArtifacts
00:07:53.776 Archiving artifacts
00:07:53.845 [Pipeline] sh
00:07:54.138 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest
00:07:54.154 [Pipeline] cleanWs
00:07:54.164 [WS-CLEANUP] Deleting project workspace...
00:07:54.164 [WS-CLEANUP] Deferred wipeout is used...
00:07:54.172 [WS-CLEANUP] done
00:07:54.173 [Pipeline] }
00:07:54.194 [Pipeline] // catchError
00:07:54.207 [Pipeline] sh
00:07:54.493 + logger -p user.info -t JENKINS-CI
00:07:54.504 [Pipeline] }
00:07:54.519 [Pipeline] // stage
00:07:54.524 [Pipeline] }
00:07:54.545 [Pipeline] // node
00:07:54.554 [Pipeline] End of Pipeline
00:07:54.588 Finished: SUCCESS