00:00:00.001 Started by upstream project "autotest-per-patch" build number 126123 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.030 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.034 The recommended git tool is: git 00:00:00.034 using credential 00000000-0000-0000-0000-000000000002 00:00:00.035 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.049 Fetching changes from the remote Git repository 00:00:00.069 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.092 Using shallow fetch with depth 1 00:00:00.092 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.092 > git --version # timeout=10 00:00:00.124 > git --version # 'git version 2.39.2' 00:00:00.124 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.160 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.160 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.606 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.617 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.628 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:03.628 > git config core.sparsecheckout # timeout=10 00:00:03.642 > git read-tree -mu HEAD # timeout=10 00:00:03.658 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:03.676 Commit message: "inventory: add WCP3 to free inventory" 00:00:03.676 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:03.760 [Pipeline] Start of Pipeline 00:00:03.773 [Pipeline] library 00:00:03.774 Loading library shm_lib@master 00:00:03.774 Library shm_lib@master is cached. Copying from home. 00:00:03.791 [Pipeline] node 00:00:03.807 Running on WFP39 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:03.809 [Pipeline] { 00:00:03.819 [Pipeline] catchError 00:00:03.821 [Pipeline] { 00:00:03.833 [Pipeline] wrap 00:00:03.842 [Pipeline] { 00:00:03.849 [Pipeline] stage 00:00:03.851 [Pipeline] { (Prologue) 00:00:04.057 [Pipeline] sh 00:00:04.337 + logger -p user.info -t JENKINS-CI 00:00:04.352 [Pipeline] echo 00:00:04.353 Node: WFP39 00:00:04.358 [Pipeline] sh 00:00:04.649 [Pipeline] setCustomBuildProperty 00:00:04.659 [Pipeline] echo 00:00:04.660 Cleanup processes 00:00:04.664 [Pipeline] sh 00:00:04.941 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.942 1297583 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.953 [Pipeline] sh 00:00:05.232 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:05.232 ++ grep -v 'sudo pgrep' 00:00:05.232 ++ awk '{print $1}' 00:00:05.232 + sudo kill -9 00:00:05.232 + true 00:00:05.244 [Pipeline] cleanWs 00:00:05.253 [WS-CLEANUP] Deleting project workspace... 00:00:05.253 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.257 [WS-CLEANUP] done 00:00:05.260 [Pipeline] setCustomBuildProperty 00:00:05.272 [Pipeline] sh 00:00:05.550 + sudo git config --global --replace-all safe.directory '*' 00:00:05.614 [Pipeline] httpRequest 00:00:05.631 [Pipeline] echo 00:00:05.632 Sorcerer 10.211.164.101 is alive 00:00:05.638 [Pipeline] httpRequest 00:00:05.642 HttpMethod: GET 00:00:05.642 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.643 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.647 Response Code: HTTP/1.1 200 OK 00:00:05.647 Success: Status code 200 is in the accepted range: 200,404 00:00:05.647 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.382 [Pipeline] sh 00:00:06.678 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.692 [Pipeline] httpRequest 00:00:06.733 [Pipeline] echo 00:00:06.736 Sorcerer 10.211.164.101 is alive 00:00:06.745 [Pipeline] httpRequest 00:00:06.748 HttpMethod: GET 00:00:06.749 URL: http://10.211.164.101/packages/spdk_2a2ade677c5da4058114c61960dae9bc40fa01d7.tar.gz 00:00:06.750 Sending request to url: http://10.211.164.101/packages/spdk_2a2ade677c5da4058114c61960dae9bc40fa01d7.tar.gz 00:00:06.752 Response Code: HTTP/1.1 200 OK 00:00:06.752 Success: Status code 200 is in the accepted range: 200,404 00:00:06.753 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_2a2ade677c5da4058114c61960dae9bc40fa01d7.tar.gz 00:00:21.916 [Pipeline] sh 00:00:22.202 + tar --no-same-owner -xf spdk_2a2ade677c5da4058114c61960dae9bc40fa01d7.tar.gz 00:00:24.748 [Pipeline] sh 00:00:25.062 + git -C spdk log --oneline -n5 00:00:25.062 2a2ade677 test/nvmf/digest: parametrize digest tests for DSA kernel mode 00:00:25.062 07d3b03c8 test/accel: parametrize accel tests for DSA kernel mode 00:00:25.062 192cfc373 test/common/autotest_common: managing idxd drivers setup 00:00:25.062 e118fc0cd test/setup: add configuration script for dsa devices 00:00:25.062 719d03c6a sock/uring: only register net impl if supported 00:00:25.073 [Pipeline] } 00:00:25.089 [Pipeline] // stage 00:00:25.097 [Pipeline] stage 00:00:25.099 [Pipeline] { (Prepare) 00:00:25.118 [Pipeline] writeFile 00:00:25.135 [Pipeline] sh 00:00:25.417 + logger -p user.info -t JENKINS-CI 00:00:25.429 [Pipeline] sh 00:00:25.713 + logger -p user.info -t JENKINS-CI 00:00:25.724 [Pipeline] sh 00:00:26.006 + cat autorun-spdk.conf 00:00:26.006 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:26.006 SPDK_TEST_FUZZER_SHORT=1 00:00:26.006 SPDK_TEST_FUZZER=1 00:00:26.006 SPDK_RUN_UBSAN=1 00:00:26.012 RUN_NIGHTLY=0 00:00:26.018 [Pipeline] readFile 00:00:26.044 [Pipeline] withEnv 00:00:26.045 [Pipeline] { 00:00:26.056 [Pipeline] sh 00:00:26.348 + set -ex 00:00:26.348 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:00:26.348 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:26.348 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:26.348 ++ SPDK_TEST_FUZZER_SHORT=1 00:00:26.348 ++ SPDK_TEST_FUZZER=1 00:00:26.348 ++ SPDK_RUN_UBSAN=1 00:00:26.348 ++ RUN_NIGHTLY=0 00:00:26.348 + case $SPDK_TEST_NVMF_NICS in 00:00:26.348 + DRIVERS= 00:00:26.348 + [[ -n '' ]] 00:00:26.348 + exit 0 00:00:26.361 [Pipeline] } 00:00:26.375 [Pipeline] // withEnv 00:00:26.379 [Pipeline] } 00:00:26.392 [Pipeline] // stage 00:00:26.399 [Pipeline] catchError 00:00:26.401 [Pipeline] { 00:00:26.414 [Pipeline] timeout 
00:00:26.414 Timeout set to expire in 30 min 00:00:26.415 [Pipeline] { 00:00:26.427 [Pipeline] stage 00:00:26.429 [Pipeline] { (Tests) 00:00:26.439 [Pipeline] sh 00:00:26.716 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:26.716 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:26.716 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:00:26.716 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:00:26.716 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:26.716 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:00:26.716 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:00:26.716 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:00:26.716 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:00:26.716 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:00:26.716 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:00:26.716 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:26.716 + source /etc/os-release 00:00:26.716 ++ NAME='Fedora Linux' 00:00:26.716 ++ VERSION='38 (Cloud Edition)' 00:00:26.716 ++ ID=fedora 00:00:26.716 ++ VERSION_ID=38 00:00:26.716 ++ VERSION_CODENAME= 00:00:26.716 ++ PLATFORM_ID=platform:f38 00:00:26.716 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:26.716 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:26.716 ++ LOGO=fedora-logo-icon 00:00:26.716 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:26.716 ++ HOME_URL=https://fedoraproject.org/ 00:00:26.716 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:26.716 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:26.716 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:26.716 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:26.716 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:26.716 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:26.716 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:26.716 ++ SUPPORT_END=2024-05-14 00:00:26.716 ++ VARIANT='Cloud Edition' 00:00:26.716 ++ VARIANT_ID=cloud 00:00:26.716 + uname -a 00:00:26.716 Linux spdk-wfp-39 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 02:47:10 UTC 2024 x86_64 GNU/Linux 00:00:26.716 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:00:30.011 Hugepages 00:00:30.011 node hugesize free / total 00:00:30.011 node0 1048576kB 0 / 0 00:00:30.011 node0 2048kB 0 / 0 00:00:30.011 node1 1048576kB 0 / 0 00:00:30.011 node1 2048kB 0 / 0 00:00:30.011 00:00:30.011 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:30.011 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:30.011 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:30.011 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:30.011 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:30.011 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:30.011 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:30.011 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:30.011 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:30.011 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:00:30.011 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:30.011 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:30.011 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:30.011 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:30.011 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:30.011 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:30.011 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 
00:00:30.011 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:30.011 + rm -f /tmp/spdk-ld-path 00:00:30.011 + source autorun-spdk.conf 00:00:30.011 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:30.011 ++ SPDK_TEST_FUZZER_SHORT=1 00:00:30.011 ++ SPDK_TEST_FUZZER=1 00:00:30.011 ++ SPDK_RUN_UBSAN=1 00:00:30.011 ++ RUN_NIGHTLY=0 00:00:30.011 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:30.011 + [[ -n '' ]] 00:00:30.011 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:30.011 + for M in /var/spdk/build-*-manifest.txt 00:00:30.011 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:30.011 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:30.011 + for M in /var/spdk/build-*-manifest.txt 00:00:30.011 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:30.011 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:30.011 ++ uname 00:00:30.011 + [[ Linux == \L\i\n\u\x ]] 00:00:30.011 + sudo dmesg -T 00:00:30.271 + sudo dmesg --clear 00:00:30.271 + dmesg_pid=1298527 00:00:30.271 + [[ Fedora Linux == FreeBSD ]] 00:00:30.271 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:30.271 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:30.271 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:30.271 + [[ -x /usr/src/fio-static/fio ]] 00:00:30.271 + sudo dmesg -Tw 00:00:30.271 + export FIO_BIN=/usr/src/fio-static/fio 00:00:30.271 + FIO_BIN=/usr/src/fio-static/fio 00:00:30.271 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:30.271 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:30.271 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:30.271 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:30.271 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:30.271 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:30.271 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:30.271 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:30.271 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:30.271 Test configuration: 00:00:30.271 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:30.271 SPDK_TEST_FUZZER_SHORT=1 00:00:30.271 SPDK_TEST_FUZZER=1 00:00:30.271 SPDK_RUN_UBSAN=1 00:00:30.271 RUN_NIGHTLY=0 14:29:06 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:00:30.271 14:29:06 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:30.271 14:29:06 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:30.271 14:29:06 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:30.271 14:29:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:30.271 14:29:06 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:30.271 14:29:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:30.271 14:29:06 -- paths/export.sh@5 -- $ export PATH 00:00:30.271 14:29:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:30.271 14:29:06 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:00:30.271 14:29:06 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:30.271 14:29:06 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720787346.XXXXXX 00:00:30.271 14:29:06 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720787346.2SJQYK 00:00:30.271 14:29:06 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:30.271 14:29:06 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:30.271 14:29:06 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:00:30.271 14:29:06 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:30.271 14:29:06 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:30.271 14:29:06 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:30.271 14:29:06 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:30.271 14:29:06 -- common/autotest_common.sh@10 -- $ set +x 00:00:30.271 14:29:06 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:30.271 14:29:06 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:30.271 14:29:06 -- pm/common@17 -- $ local monitor 00:00:30.271 14:29:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:30.271 14:29:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:30.271 14:29:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:30.271 14:29:06 -- pm/common@21 -- $ date +%s 00:00:30.271 14:29:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:30.271 14:29:06 -- pm/common@21 -- $ date +%s 
00:00:30.271 14:29:06 -- pm/common@25 -- $ sleep 1 00:00:30.271 14:29:06 -- pm/common@21 -- $ date +%s 00:00:30.271 14:29:07 -- pm/common@21 -- $ date +%s 00:00:30.271 14:29:07 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720787347 00:00:30.271 14:29:07 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720787347 00:00:30.271 14:29:07 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720787347 00:00:30.271 14:29:07 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720787347 00:00:30.271 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720787347_collect-vmstat.pm.log 00:00:30.271 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720787347_collect-cpu-load.pm.log 00:00:30.271 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720787347_collect-cpu-temp.pm.log 00:00:30.531 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720787347_collect-bmc-pm.bmc.pm.log 00:00:31.470 14:29:08 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:31.470 14:29:08 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:31.470 14:29:08 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:31.470 14:29:08 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:31.470 14:29:08 -- spdk/autobuild.sh@16 -- $ date -u 00:00:31.470 Fri Jul 12 12:29:08 PM UTC 2024 00:00:31.470 14:29:08 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:31.470 v24.09-pre-206-g2a2ade677 00:00:31.470 14:29:08 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:31.470 14:29:08 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:31.470 14:29:08 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:31.470 14:29:08 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:31.470 14:29:08 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:31.470 14:29:08 -- common/autotest_common.sh@10 -- $ set +x 00:00:31.470 ************************************ 00:00:31.470 START TEST ubsan 00:00:31.470 ************************************ 00:00:31.470 14:29:08 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:31.470 using ubsan 00:00:31.470 00:00:31.470 real 0m0.001s 00:00:31.470 user 0m0.000s 00:00:31.470 sys 0m0.001s 00:00:31.470 14:29:08 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:31.470 14:29:08 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:31.470 ************************************ 00:00:31.470 END TEST ubsan 00:00:31.470 ************************************ 00:00:31.470 14:29:08 -- common/autotest_common.sh@1142 -- $ return 0 00:00:31.470 14:29:08 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:31.470 14:29:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:31.470 14:29:08 
-- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:31.470 14:29:08 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:00:31.470 14:29:08 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:00:31.470 14:29:08 -- common/autobuild_common.sh@432 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:00:31.470 14:29:08 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:00:31.470 14:29:08 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:31.470 14:29:08 -- common/autotest_common.sh@10 -- $ set +x 00:00:31.470 ************************************ 00:00:31.470 START TEST autobuild_llvm_precompile 00:00:31.470 ************************************ 00:00:31.470 14:29:08 autobuild_llvm_precompile -- common/autotest_common.sh@1123 -- $ _llvm_precompile 00:00:31.470 14:29:08 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:00:31.470 14:29:08 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:00:31.470 Target: x86_64-redhat-linux-gnu 00:00:31.470 Thread model: posix 00:00:31.470 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:00:31.470 14:29:08 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16 00:00:31.470 14:29:08 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:00:31.470 14:29:08 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:00:31.470 14:29:08 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:00:31.470 14:29:08 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:00:31.470 14:29:08 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:00:31.470 14:29:08 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:00:31.470 14:29:08 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:00:31.470 14:29:08 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:00:31.470 14:29:08 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:00:31.730 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:00:31.730 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:00:32.299 Using 'verbs' RDMA provider 00:00:48.121 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:03.009 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:03.009 Creating mk/config.mk...done. 00:01:03.009 Creating mk/cc.flags.mk...done. 
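For readers following the fuzzer setup above: the --with-fuzzer path is resolved by the extglob pattern shown in autobuild_common.sh. A minimal sketch of the equivalent lookup, assuming bash with extglob and clang 16 on Fedora as in this run (the simplified pattern and variable names here are illustrative, not the exact script):

  # Sketch of how the libFuzzer no-main runtime is located (assumption: clang 16,
  # library installed under /usr/lib64/clang/16 as reported in this log).
  shopt -s extglob nullglob
  clang_num=16
  fuzzer_libs=(/usr/lib*/clang/"$clang_num"/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
  fuzzer_lib=${fuzzer_libs[0]}
  echo "$fuzzer_lib"   # e.g. /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a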
00:01:03.009 Type 'make' to build. 00:01:03.009 00:01:03.009 real 0m30.331s 00:01:03.009 user 0m12.995s 00:01:03.009 sys 0m16.792s 00:01:03.009 14:29:38 autobuild_llvm_precompile -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:03.009 14:29:38 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:03.009 ************************************ 00:01:03.009 END TEST autobuild_llvm_precompile 00:01:03.009 ************************************ 00:01:03.009 14:29:38 -- common/autotest_common.sh@1142 -- $ return 0 00:01:03.009 14:29:38 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:03.009 14:29:38 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:03.009 14:29:38 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:03.009 14:29:38 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:03.009 14:29:38 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:03.009 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:03.009 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:03.009 Using 'verbs' RDMA provider 00:01:15.788 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:28.008 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:28.008 Creating mk/config.mk...done. 00:01:28.008 Creating mk/cc.flags.mk...done. 00:01:28.008 Type 'make' to build. 00:01:28.008 14:30:04 -- spdk/autobuild.sh@69 -- $ run_test make make -j72 00:01:28.008 14:30:04 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:28.008 14:30:04 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:28.008 14:30:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.008 ************************************ 00:01:28.008 START TEST make 00:01:28.008 ************************************ 00:01:28.008 14:30:04 make -- common/autotest_common.sh@1123 -- $ make -j72 00:01:28.575 make[1]: Nothing to be done for 'all'. 
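The configure and make step entered above can be approximated outside of Jenkins with the same flags recorded in this log. A rough sketch, where the checkout path, clang version, fio location, and fuzzer runtime path are assumptions taken from this run and may differ on other machines:

  # Approximate local reproduction of the build step logged above (not the CI script itself).
  cd /path/to/spdk                      # hypothetical SPDK checkout location
  CC=clang-16 CXX=clang++-16 ./configure \
      --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-vfio-user \
      --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
  make -j"$(nproc)"                     # the CI run used make -j72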
00:01:29.963 The Meson build system 00:01:29.963 Version: 1.3.1 00:01:29.963 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:01:29.963 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:29.963 Build type: native build 00:01:29.963 Project name: libvfio-user 00:01:29.963 Project version: 0.0.1 00:01:29.963 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:01:29.963 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:01:29.963 Host machine cpu family: x86_64 00:01:29.963 Host machine cpu: x86_64 00:01:29.963 Run-time dependency threads found: YES 00:01:29.963 Library dl found: YES 00:01:29.963 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:29.963 Run-time dependency json-c found: YES 0.17 00:01:29.963 Run-time dependency cmocka found: YES 1.1.7 00:01:29.963 Program pytest-3 found: NO 00:01:29.963 Program flake8 found: NO 00:01:29.963 Program misspell-fixer found: NO 00:01:29.963 Program restructuredtext-lint found: NO 00:01:29.963 Program valgrind found: YES (/usr/bin/valgrind) 00:01:29.963 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:29.963 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:29.963 Compiler for C supports arguments -Wwrite-strings: YES 00:01:29.963 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:29.963 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:29.963 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:29.963 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:29.963 Build targets in project: 8 00:01:29.963 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:29.963 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:29.963 00:01:29.963 libvfio-user 0.0.1 00:01:29.963 00:01:29.963 User defined options 00:01:29.963 buildtype : debug 00:01:29.963 default_library: static 00:01:29.963 libdir : /usr/local/lib 00:01:29.963 00:01:29.963 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:30.531 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:30.531 [1/36] Compiling C object samples/lspci.p/lspci.c.o 00:01:30.531 [2/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:30.531 [3/36] Compiling C object samples/null.p/null.c.o 00:01:30.531 [4/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:30.531 [5/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:01:30.531 [6/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:30.531 [7/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:01:30.531 [8/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:01:30.531 [9/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:01:30.531 [10/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:30.531 [11/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:30.531 [12/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:30.531 [13/36] Compiling C object test/unit_tests.p/mocks.c.o 00:01:30.531 [14/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:30.531 [15/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:01:30.531 [16/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:01:30.531 [17/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:01:30.531 [18/36] Compiling C object samples/server.p/server.c.o 00:01:30.531 [19/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:30.531 [20/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:30.531 [21/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:30.531 [22/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:30.531 [23/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:30.531 [24/36] Compiling C object samples/client.p/client.c.o 00:01:30.531 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:30.531 [26/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:30.531 [27/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:30.839 [28/36] Linking target samples/client 00:01:30.839 [29/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:01:30.839 [30/36] Linking target test/unit_tests 00:01:30.839 [31/36] Linking static target lib/libvfio-user.a 00:01:30.839 [32/36] Linking target samples/gpio-pci-idio-16 00:01:30.839 [33/36] Linking target samples/shadow_ioeventfd_server 00:01:30.839 [34/36] Linking target samples/null 00:01:30.839 [35/36] Linking target samples/server 00:01:30.839 [36/36] Linking target samples/lspci 00:01:30.839 INFO: autodetecting backend as ninja 00:01:30.839 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:30.839 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:31.106 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:31.107 ninja: no work to do. 00:01:37.676 The Meson build system 00:01:37.676 Version: 1.3.1 00:01:37.676 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:01:37.676 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:01:37.676 Build type: native build 00:01:37.676 Program cat found: YES (/usr/bin/cat) 00:01:37.676 Project name: DPDK 00:01:37.676 Project version: 24.03.0 00:01:37.676 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:01:37.676 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:01:37.676 Host machine cpu family: x86_64 00:01:37.676 Host machine cpu: x86_64 00:01:37.676 Message: ## Building in Developer Mode ## 00:01:37.676 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:37.676 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:37.676 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:37.676 Program python3 found: YES (/usr/bin/python3) 00:01:37.676 Program cat found: YES (/usr/bin/cat) 00:01:37.676 Compiler for C supports arguments -march=native: YES 00:01:37.676 Checking for size of "void *" : 8 00:01:37.676 Checking for size of "void *" : 8 (cached) 00:01:37.676 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:37.676 Library m found: YES 00:01:37.676 Library numa found: YES 00:01:37.676 Has header "numaif.h" : YES 00:01:37.676 Library fdt found: NO 00:01:37.676 Library execinfo found: NO 00:01:37.676 Has header "execinfo.h" : YES 00:01:37.676 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:37.676 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:37.676 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:37.676 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:37.676 Run-time dependency openssl found: YES 3.0.9 00:01:37.676 Run-time dependency libpcap found: YES 1.10.4 00:01:37.676 Has header "pcap.h" with dependency libpcap: YES 00:01:37.676 Compiler for C supports arguments -Wcast-qual: YES 00:01:37.676 Compiler for C supports arguments -Wdeprecated: YES 00:01:37.676 Compiler for C supports arguments -Wformat: YES 00:01:37.676 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:37.676 Compiler for C supports arguments -Wformat-security: YES 00:01:37.676 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:37.676 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:37.676 Compiler for C supports arguments -Wnested-externs: YES 00:01:37.676 Compiler for C supports arguments -Wold-style-definition: YES 00:01:37.676 Compiler for C supports arguments -Wpointer-arith: YES 00:01:37.676 Compiler for C supports arguments -Wsign-compare: YES 00:01:37.676 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:37.676 Compiler for C supports arguments -Wundef: YES 00:01:37.676 Compiler for C supports arguments -Wwrite-strings: YES 00:01:37.676 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:37.676 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:01:37.676 Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:01:37.676 Program objdump found: YES (/usr/bin/objdump) 00:01:37.676 Compiler for C supports arguments -mavx512f: YES 00:01:37.676 Checking if "AVX512 checking" compiles: YES 00:01:37.676 Fetching value of define "__SSE4_2__" : 1 00:01:37.676 Fetching value of define "__AES__" : 1 00:01:37.676 Fetching value of define "__AVX__" : 1 00:01:37.676 Fetching value of define "__AVX2__" : 1 00:01:37.676 Fetching value of define "__AVX512BW__" : 1 00:01:37.676 Fetching value of define "__AVX512CD__" : 1 00:01:37.676 Fetching value of define "__AVX512DQ__" : 1 00:01:37.676 Fetching value of define "__AVX512F__" : 1 00:01:37.676 Fetching value of define "__AVX512VL__" : 1 00:01:37.676 Fetching value of define "__PCLMUL__" : 1 00:01:37.676 Fetching value of define "__RDRND__" : 1 00:01:37.676 Fetching value of define "__RDSEED__" : 1 00:01:37.676 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:37.676 Fetching value of define "__znver1__" : (undefined) 00:01:37.676 Fetching value of define "__znver2__" : (undefined) 00:01:37.676 Fetching value of define "__znver3__" : (undefined) 00:01:37.676 Fetching value of define "__znver4__" : (undefined) 00:01:37.676 Compiler for C supports arguments -Wno-format-truncation: NO 00:01:37.676 Message: lib/log: Defining dependency "log" 00:01:37.676 Message: lib/kvargs: Defining dependency "kvargs" 00:01:37.676 Message: lib/telemetry: Defining dependency "telemetry" 00:01:37.676 Checking for function "getentropy" : NO 00:01:37.676 Message: lib/eal: Defining dependency "eal" 00:01:37.676 Message: lib/ring: Defining dependency "ring" 00:01:37.676 Message: lib/rcu: Defining dependency "rcu" 00:01:37.676 Message: lib/mempool: Defining dependency "mempool" 00:01:37.676 Message: lib/mbuf: Defining dependency "mbuf" 00:01:37.676 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:37.676 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:37.676 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:37.676 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:37.676 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:37.676 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:37.676 Compiler for C supports arguments -mpclmul: YES 00:01:37.676 Compiler for C supports arguments -maes: YES 00:01:37.676 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:37.676 Compiler for C supports arguments -mavx512bw: YES 00:01:37.676 Compiler for C supports arguments -mavx512dq: YES 00:01:37.676 Compiler for C supports arguments -mavx512vl: YES 00:01:37.676 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:37.676 Compiler for C supports arguments -mavx2: YES 00:01:37.676 Compiler for C supports arguments -mavx: YES 00:01:37.676 Message: lib/net: Defining dependency "net" 00:01:37.676 Message: lib/meter: Defining dependency "meter" 00:01:37.676 Message: lib/ethdev: Defining dependency "ethdev" 00:01:37.676 Message: lib/pci: Defining dependency "pci" 00:01:37.676 Message: lib/cmdline: Defining dependency "cmdline" 00:01:37.676 Message: lib/hash: Defining dependency "hash" 00:01:37.676 Message: lib/timer: Defining dependency "timer" 00:01:37.676 Message: lib/compressdev: Defining dependency "compressdev" 00:01:37.676 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:37.676 Message: lib/dmadev: Defining dependency "dmadev" 00:01:37.676 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:37.676 Message: lib/power: Defining dependency "power" 00:01:37.676 Message: lib/reorder: Defining 
dependency "reorder" 00:01:37.676 Message: lib/security: Defining dependency "security" 00:01:37.676 Has header "linux/userfaultfd.h" : YES 00:01:37.676 Has header "linux/vduse.h" : YES 00:01:37.676 Message: lib/vhost: Defining dependency "vhost" 00:01:37.676 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:01:37.676 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:37.676 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:37.676 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:37.676 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:37.676 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:37.676 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:37.676 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:37.676 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:37.676 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:37.676 Program doxygen found: YES (/usr/bin/doxygen) 00:01:37.676 Configuring doxy-api-html.conf using configuration 00:01:37.676 Configuring doxy-api-man.conf using configuration 00:01:37.676 Program mandb found: YES (/usr/bin/mandb) 00:01:37.676 Program sphinx-build found: NO 00:01:37.676 Configuring rte_build_config.h using configuration 00:01:37.676 Message: 00:01:37.676 ================= 00:01:37.676 Applications Enabled 00:01:37.676 ================= 00:01:37.676 00:01:37.676 apps: 00:01:37.676 00:01:37.676 00:01:37.676 Message: 00:01:37.676 ================= 00:01:37.676 Libraries Enabled 00:01:37.676 ================= 00:01:37.676 00:01:37.676 libs: 00:01:37.676 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:37.676 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:37.676 cryptodev, dmadev, power, reorder, security, vhost, 00:01:37.676 00:01:37.676 Message: 00:01:37.676 =============== 00:01:37.676 Drivers Enabled 00:01:37.676 =============== 00:01:37.676 00:01:37.676 common: 00:01:37.676 00:01:37.676 bus: 00:01:37.676 pci, vdev, 00:01:37.676 mempool: 00:01:37.676 ring, 00:01:37.676 dma: 00:01:37.676 00:01:37.676 net: 00:01:37.676 00:01:37.676 crypto: 00:01:37.676 00:01:37.676 compress: 00:01:37.676 00:01:37.676 vdpa: 00:01:37.676 00:01:37.676 00:01:37.676 Message: 00:01:37.676 ================= 00:01:37.676 Content Skipped 00:01:37.676 ================= 00:01:37.676 00:01:37.676 apps: 00:01:37.676 dumpcap: explicitly disabled via build config 00:01:37.676 graph: explicitly disabled via build config 00:01:37.676 pdump: explicitly disabled via build config 00:01:37.676 proc-info: explicitly disabled via build config 00:01:37.677 test-acl: explicitly disabled via build config 00:01:37.677 test-bbdev: explicitly disabled via build config 00:01:37.677 test-cmdline: explicitly disabled via build config 00:01:37.677 test-compress-perf: explicitly disabled via build config 00:01:37.677 test-crypto-perf: explicitly disabled via build config 00:01:37.677 test-dma-perf: explicitly disabled via build config 00:01:37.677 test-eventdev: explicitly disabled via build config 00:01:37.677 test-fib: explicitly disabled via build config 00:01:37.677 test-flow-perf: explicitly disabled via build config 00:01:37.677 test-gpudev: explicitly disabled via build config 00:01:37.677 test-mldev: explicitly disabled via build config 00:01:37.677 test-pipeline: explicitly disabled via build config 00:01:37.677 test-pmd: explicitly 
disabled via build config 00:01:37.677 test-regex: explicitly disabled via build config 00:01:37.677 test-sad: explicitly disabled via build config 00:01:37.677 test-security-perf: explicitly disabled via build config 00:01:37.677 00:01:37.677 libs: 00:01:37.677 argparse: explicitly disabled via build config 00:01:37.677 metrics: explicitly disabled via build config 00:01:37.677 acl: explicitly disabled via build config 00:01:37.677 bbdev: explicitly disabled via build config 00:01:37.677 bitratestats: explicitly disabled via build config 00:01:37.677 bpf: explicitly disabled via build config 00:01:37.677 cfgfile: explicitly disabled via build config 00:01:37.677 distributor: explicitly disabled via build config 00:01:37.677 efd: explicitly disabled via build config 00:01:37.677 eventdev: explicitly disabled via build config 00:01:37.677 dispatcher: explicitly disabled via build config 00:01:37.677 gpudev: explicitly disabled via build config 00:01:37.677 gro: explicitly disabled via build config 00:01:37.677 gso: explicitly disabled via build config 00:01:37.677 ip_frag: explicitly disabled via build config 00:01:37.677 jobstats: explicitly disabled via build config 00:01:37.677 latencystats: explicitly disabled via build config 00:01:37.677 lpm: explicitly disabled via build config 00:01:37.677 member: explicitly disabled via build config 00:01:37.677 pcapng: explicitly disabled via build config 00:01:37.677 rawdev: explicitly disabled via build config 00:01:37.677 regexdev: explicitly disabled via build config 00:01:37.677 mldev: explicitly disabled via build config 00:01:37.677 rib: explicitly disabled via build config 00:01:37.677 sched: explicitly disabled via build config 00:01:37.677 stack: explicitly disabled via build config 00:01:37.677 ipsec: explicitly disabled via build config 00:01:37.677 pdcp: explicitly disabled via build config 00:01:37.677 fib: explicitly disabled via build config 00:01:37.677 port: explicitly disabled via build config 00:01:37.677 pdump: explicitly disabled via build config 00:01:37.677 table: explicitly disabled via build config 00:01:37.677 pipeline: explicitly disabled via build config 00:01:37.677 graph: explicitly disabled via build config 00:01:37.677 node: explicitly disabled via build config 00:01:37.677 00:01:37.677 drivers: 00:01:37.677 common/cpt: not in enabled drivers build config 00:01:37.677 common/dpaax: not in enabled drivers build config 00:01:37.677 common/iavf: not in enabled drivers build config 00:01:37.677 common/idpf: not in enabled drivers build config 00:01:37.677 common/ionic: not in enabled drivers build config 00:01:37.677 common/mvep: not in enabled drivers build config 00:01:37.677 common/octeontx: not in enabled drivers build config 00:01:37.677 bus/auxiliary: not in enabled drivers build config 00:01:37.677 bus/cdx: not in enabled drivers build config 00:01:37.677 bus/dpaa: not in enabled drivers build config 00:01:37.677 bus/fslmc: not in enabled drivers build config 00:01:37.677 bus/ifpga: not in enabled drivers build config 00:01:37.677 bus/platform: not in enabled drivers build config 00:01:37.677 bus/uacce: not in enabled drivers build config 00:01:37.677 bus/vmbus: not in enabled drivers build config 00:01:37.677 common/cnxk: not in enabled drivers build config 00:01:37.677 common/mlx5: not in enabled drivers build config 00:01:37.677 common/nfp: not in enabled drivers build config 00:01:37.677 common/nitrox: not in enabled drivers build config 00:01:37.677 common/qat: not in enabled drivers build config 
00:01:37.677 common/sfc_efx: not in enabled drivers build config 00:01:37.677 mempool/bucket: not in enabled drivers build config 00:01:37.677 mempool/cnxk: not in enabled drivers build config 00:01:37.677 mempool/dpaa: not in enabled drivers build config 00:01:37.677 mempool/dpaa2: not in enabled drivers build config 00:01:37.677 mempool/octeontx: not in enabled drivers build config 00:01:37.677 mempool/stack: not in enabled drivers build config 00:01:37.677 dma/cnxk: not in enabled drivers build config 00:01:37.677 dma/dpaa: not in enabled drivers build config 00:01:37.677 dma/dpaa2: not in enabled drivers build config 00:01:37.677 dma/hisilicon: not in enabled drivers build config 00:01:37.677 dma/idxd: not in enabled drivers build config 00:01:37.677 dma/ioat: not in enabled drivers build config 00:01:37.677 dma/skeleton: not in enabled drivers build config 00:01:37.677 net/af_packet: not in enabled drivers build config 00:01:37.677 net/af_xdp: not in enabled drivers build config 00:01:37.677 net/ark: not in enabled drivers build config 00:01:37.677 net/atlantic: not in enabled drivers build config 00:01:37.677 net/avp: not in enabled drivers build config 00:01:37.677 net/axgbe: not in enabled drivers build config 00:01:37.677 net/bnx2x: not in enabled drivers build config 00:01:37.677 net/bnxt: not in enabled drivers build config 00:01:37.677 net/bonding: not in enabled drivers build config 00:01:37.677 net/cnxk: not in enabled drivers build config 00:01:37.677 net/cpfl: not in enabled drivers build config 00:01:37.677 net/cxgbe: not in enabled drivers build config 00:01:37.677 net/dpaa: not in enabled drivers build config 00:01:37.677 net/dpaa2: not in enabled drivers build config 00:01:37.677 net/e1000: not in enabled drivers build config 00:01:37.677 net/ena: not in enabled drivers build config 00:01:37.677 net/enetc: not in enabled drivers build config 00:01:37.677 net/enetfec: not in enabled drivers build config 00:01:37.677 net/enic: not in enabled drivers build config 00:01:37.677 net/failsafe: not in enabled drivers build config 00:01:37.677 net/fm10k: not in enabled drivers build config 00:01:37.677 net/gve: not in enabled drivers build config 00:01:37.677 net/hinic: not in enabled drivers build config 00:01:37.677 net/hns3: not in enabled drivers build config 00:01:37.677 net/i40e: not in enabled drivers build config 00:01:37.677 net/iavf: not in enabled drivers build config 00:01:37.677 net/ice: not in enabled drivers build config 00:01:37.677 net/idpf: not in enabled drivers build config 00:01:37.677 net/igc: not in enabled drivers build config 00:01:37.677 net/ionic: not in enabled drivers build config 00:01:37.677 net/ipn3ke: not in enabled drivers build config 00:01:37.677 net/ixgbe: not in enabled drivers build config 00:01:37.677 net/mana: not in enabled drivers build config 00:01:37.677 net/memif: not in enabled drivers build config 00:01:37.677 net/mlx4: not in enabled drivers build config 00:01:37.677 net/mlx5: not in enabled drivers build config 00:01:37.677 net/mvneta: not in enabled drivers build config 00:01:37.677 net/mvpp2: not in enabled drivers build config 00:01:37.677 net/netvsc: not in enabled drivers build config 00:01:37.677 net/nfb: not in enabled drivers build config 00:01:37.677 net/nfp: not in enabled drivers build config 00:01:37.677 net/ngbe: not in enabled drivers build config 00:01:37.677 net/null: not in enabled drivers build config 00:01:37.677 net/octeontx: not in enabled drivers build config 00:01:37.677 net/octeon_ep: not in enabled 
drivers build config 00:01:37.677 net/pcap: not in enabled drivers build config 00:01:37.677 net/pfe: not in enabled drivers build config 00:01:37.677 net/qede: not in enabled drivers build config 00:01:37.677 net/ring: not in enabled drivers build config 00:01:37.677 net/sfc: not in enabled drivers build config 00:01:37.677 net/softnic: not in enabled drivers build config 00:01:37.677 net/tap: not in enabled drivers build config 00:01:37.677 net/thunderx: not in enabled drivers build config 00:01:37.677 net/txgbe: not in enabled drivers build config 00:01:37.677 net/vdev_netvsc: not in enabled drivers build config 00:01:37.677 net/vhost: not in enabled drivers build config 00:01:37.677 net/virtio: not in enabled drivers build config 00:01:37.677 net/vmxnet3: not in enabled drivers build config 00:01:37.677 raw/*: missing internal dependency, "rawdev" 00:01:37.677 crypto/armv8: not in enabled drivers build config 00:01:37.677 crypto/bcmfs: not in enabled drivers build config 00:01:37.677 crypto/caam_jr: not in enabled drivers build config 00:01:37.677 crypto/ccp: not in enabled drivers build config 00:01:37.677 crypto/cnxk: not in enabled drivers build config 00:01:37.677 crypto/dpaa_sec: not in enabled drivers build config 00:01:37.677 crypto/dpaa2_sec: not in enabled drivers build config 00:01:37.677 crypto/ipsec_mb: not in enabled drivers build config 00:01:37.677 crypto/mlx5: not in enabled drivers build config 00:01:37.677 crypto/mvsam: not in enabled drivers build config 00:01:37.677 crypto/nitrox: not in enabled drivers build config 00:01:37.677 crypto/null: not in enabled drivers build config 00:01:37.677 crypto/octeontx: not in enabled drivers build config 00:01:37.677 crypto/openssl: not in enabled drivers build config 00:01:37.677 crypto/scheduler: not in enabled drivers build config 00:01:37.677 crypto/uadk: not in enabled drivers build config 00:01:37.677 crypto/virtio: not in enabled drivers build config 00:01:37.677 compress/isal: not in enabled drivers build config 00:01:37.677 compress/mlx5: not in enabled drivers build config 00:01:37.677 compress/nitrox: not in enabled drivers build config 00:01:37.677 compress/octeontx: not in enabled drivers build config 00:01:37.677 compress/zlib: not in enabled drivers build config 00:01:37.677 regex/*: missing internal dependency, "regexdev" 00:01:37.677 ml/*: missing internal dependency, "mldev" 00:01:37.677 vdpa/ifc: not in enabled drivers build config 00:01:37.677 vdpa/mlx5: not in enabled drivers build config 00:01:37.677 vdpa/nfp: not in enabled drivers build config 00:01:37.677 vdpa/sfc: not in enabled drivers build config 00:01:37.677 event/*: missing internal dependency, "eventdev" 00:01:37.677 baseband/*: missing internal dependency, "bbdev" 00:01:37.677 gpu/*: missing internal dependency, "gpudev" 00:01:37.677 00:01:37.677 00:01:37.677 Build targets in project: 85 00:01:37.677 00:01:37.677 DPDK 24.03.0 00:01:37.677 00:01:37.677 User defined options 00:01:37.677 buildtype : debug 00:01:37.677 default_library : static 00:01:37.677 libdir : lib 00:01:37.677 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:37.677 c_args : -fPIC -Werror 00:01:37.677 c_link_args : 00:01:37.677 cpu_instruction_set: native 00:01:37.677 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:37.678 disable_libs : 
port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:37.678 enable_docs : false 00:01:37.678 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:37.678 enable_kmods : false 00:01:37.678 max_lcores : 128 00:01:37.678 tests : false 00:01:37.678 00:01:37.678 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:37.678 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:01:37.678 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:37.678 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:37.678 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:37.678 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:37.678 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:37.678 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:37.678 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:37.678 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:37.678 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:37.678 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:37.678 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:37.678 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:37.678 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:37.678 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:37.678 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:37.678 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:37.678 [17/268] Linking static target lib/librte_kvargs.a 00:01:37.678 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:37.678 [19/268] Linking static target lib/librte_log.a 00:01:37.678 [20/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.936 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:37.936 [22/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:37.936 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:37.936 [24/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:37.936 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:37.936 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:37.937 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:37.937 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:37.937 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:37.937 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:37.937 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:37.937 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:37.937 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:37.937 
[34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:37.937 [35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:37.937 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:37.937 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:37.937 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:37.937 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:37.937 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:37.937 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:37.937 [42/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:37.937 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:37.937 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:37.937 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:37.937 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:37.937 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:37.937 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:37.937 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:37.937 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:37.937 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:37.937 [52/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:37.937 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:37.937 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:37.937 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:37.937 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:37.937 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:37.937 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:37.937 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:37.937 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:37.937 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:37.937 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:37.937 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:37.937 [64/268] Linking static target lib/librte_telemetry.a 00:01:37.937 [65/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:37.937 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:37.937 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:37.937 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:37.937 [69/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:37.937 [70/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:37.937 [71/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:37.937 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:37.937 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:37.937 [74/268] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:37.937 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:37.937 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:37.937 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:37.937 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:37.937 [79/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:37.937 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:37.937 [81/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:37.937 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:37.937 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:37.937 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:37.937 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:37.937 [86/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:37.937 [87/268] Linking static target lib/librte_ring.a 00:01:37.937 [88/268] Linking static target lib/librte_pci.a 00:01:37.937 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:37.937 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:37.937 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:37.937 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:38.196 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:38.196 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:38.196 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:38.196 [96/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:38.196 [97/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:38.196 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:38.196 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:38.196 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:38.196 [101/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:38.196 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:38.196 [103/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:38.196 [104/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:38.196 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:38.196 [106/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:38.196 [107/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:38.196 [108/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:38.196 [109/268] Linking static target lib/librte_eal.a 00:01:38.196 [110/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:38.196 [111/268] Linking static target lib/librte_mempool.a 00:01:38.196 [112/268] Linking static target lib/librte_rcu.a 00:01:38.196 [113/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:38.196 [114/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:38.196 [115/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:38.196 [116/268] Linking target lib/librte_log.so.24.1 00:01:38.196 [117/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:38.196 [118/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.454 [119/268] Linking static target lib/librte_mbuf.a 00:01:38.454 [120/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:38.454 [121/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.454 [122/268] Linking static target lib/librte_net.a 00:01:38.454 [123/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:38.454 [124/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:38.454 [125/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:38.454 [126/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.454 [127/268] Linking static target lib/librte_meter.a 00:01:38.454 [128/268] Linking target lib/librte_kvargs.so.24.1 00:01:38.454 [129/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.454 [130/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:38.454 [131/268] Linking static target lib/librte_timer.a 00:01:38.454 [132/268] Linking target lib/librte_telemetry.so.24.1 00:01:38.454 [133/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:38.454 [134/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:38.454 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:38.454 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:38.454 [137/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:38.713 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:38.713 [139/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:38.713 [140/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:38.713 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:38.713 [142/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:38.713 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:38.713 [144/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:38.713 [145/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:38.713 [146/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:38.713 [147/268] Linking static target lib/librte_cmdline.a 00:01:38.713 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:38.713 [149/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:38.713 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:38.713 [151/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:38.713 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:38.713 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:38.713 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:38.713 [155/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:38.713 [156/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:38.713 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:38.713 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:38.713 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:38.713 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:38.713 [161/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:38.713 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:38.713 [163/268] Linking static target lib/librte_dmadev.a 00:01:38.713 [164/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:38.713 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:38.713 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:38.713 [167/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.713 [168/268] Linking static target lib/librte_compressdev.a 00:01:38.713 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:38.713 [170/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:38.713 [171/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:38.713 [172/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:38.713 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:38.713 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:38.713 [175/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.713 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:38.713 [177/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:38.713 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:38.713 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:38.713 [180/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:38.713 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:38.713 [182/268] Linking static target lib/librte_power.a 00:01:38.713 [183/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:38.713 [184/268] Linking static target lib/librte_reorder.a 00:01:38.713 [185/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:38.713 [186/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:38.713 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:38.713 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:38.713 [189/268] Linking static target lib/librte_security.a 00:01:38.713 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:38.971 [191/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:38.971 [192/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:38.972 [193/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:38.972 [194/268] Linking static target lib/librte_hash.a 00:01:38.972 [195/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:38.972 [196/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:38.972 [197/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:38.972 [198/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:38.972 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:38.972 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:38.972 [201/268] Linking static target lib/librte_cryptodev.a 00:01:38.972 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:38.972 [203/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.972 [204/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:38.972 [205/268] Linking static target drivers/librte_bus_vdev.a 00:01:38.972 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:38.972 [207/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.972 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:38.972 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:38.972 [210/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:38.972 [211/268] Linking static target drivers/librte_bus_pci.a 00:01:38.972 [212/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:39.230 [213/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.230 [214/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.230 [215/268] Linking static target drivers/librte_mempool_ring.a 00:01:39.230 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:39.230 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.230 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:39.230 [219/268] Linking static target lib/librte_ethdev.a 00:01:39.489 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.489 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.489 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.489 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.747 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:39.747 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.747 [226/268] Linking static target lib/librte_vhost.a 00:01:40.007 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.007 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.007 [229/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.387 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.325 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.451 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:51.388 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.388 [234/268] Linking target lib/librte_eal.so.24.1 00:01:51.388 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:51.388 [236/268] Linking target lib/librte_ring.so.24.1 00:01:51.388 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:51.388 [238/268] Linking target lib/librte_meter.so.24.1 00:01:51.388 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:51.647 [240/268] Linking target lib/librte_pci.so.24.1 00:01:51.647 [241/268] Linking target lib/librte_timer.so.24.1 00:01:51.647 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:51.647 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:51.647 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:51.647 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:51.647 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:51.647 [247/268] Linking target lib/librte_mempool.so.24.1 00:01:51.647 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:51.647 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:51.907 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:51.907 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:51.907 [252/268] Linking target lib/librte_mbuf.so.24.1 00:01:51.907 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:52.166 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:52.167 [255/268] Linking target lib/librte_compressdev.so.24.1 00:01:52.167 [256/268] Linking target lib/librte_net.so.24.1 00:01:52.167 [257/268] Linking target lib/librte_reorder.so.24.1 00:01:52.167 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:52.167 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:52.426 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:52.426 [261/268] Linking target lib/librte_hash.so.24.1 00:01:52.426 [262/268] Linking target lib/librte_security.so.24.1 00:01:52.426 [263/268] Linking target lib/librte_cmdline.so.24.1 00:01:52.426 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:52.426 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:52.426 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:52.686 [267/268] Linking target lib/librte_power.so.24.1 00:01:52.686 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:52.686 INFO: autodetecting backend as ninja 00:01:52.686 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:01:53.622 CC lib/ut/ut.o 00:01:53.622 CC lib/ut_mock/mock.o 00:01:53.622 CC lib/log/log.o 00:01:53.622 CC lib/log/log_flags.o 00:01:53.622 CC lib/log/log_deprecated.o 00:01:53.622 LIB libspdk_ut.a 00:01:53.881 LIB libspdk_ut_mock.a 00:01:53.881 LIB libspdk_log.a 00:01:54.140 CC lib/util/base64.o 00:01:54.140 CC lib/util/bit_array.o 00:01:54.140 CC lib/util/crc16.o 00:01:54.140 CC lib/util/cpuset.o 00:01:54.140 CXX lib/trace_parser/trace.o 00:01:54.140 CC lib/util/crc32.o 
00:01:54.140 CC lib/util/crc32_ieee.o 00:01:54.140 CC lib/util/crc32c.o 00:01:54.140 CC lib/util/crc64.o 00:01:54.140 CC lib/util/dif.o 00:01:54.140 CC lib/util/fd.o 00:01:54.140 CC lib/util/file.o 00:01:54.140 CC lib/dma/dma.o 00:01:54.140 CC lib/ioat/ioat.o 00:01:54.140 CC lib/util/hexlify.o 00:01:54.140 CC lib/util/iov.o 00:01:54.140 CC lib/util/math.o 00:01:54.140 CC lib/util/pipe.o 00:01:54.140 CC lib/util/strerror_tls.o 00:01:54.140 CC lib/util/string.o 00:01:54.140 CC lib/util/uuid.o 00:01:54.140 CC lib/util/fd_group.o 00:01:54.140 CC lib/util/xor.o 00:01:54.140 CC lib/util/zipf.o 00:01:54.140 CC lib/vfio_user/host/vfio_user_pci.o 00:01:54.140 CC lib/vfio_user/host/vfio_user.o 00:01:54.140 LIB libspdk_dma.a 00:01:54.399 LIB libspdk_ioat.a 00:01:54.399 LIB libspdk_vfio_user.a 00:01:54.399 LIB libspdk_util.a 00:01:54.657 LIB libspdk_trace_parser.a 00:01:54.657 CC lib/conf/conf.o 00:01:54.657 CC lib/json/json_parse.o 00:01:54.657 CC lib/json/json_write.o 00:01:54.657 CC lib/json/json_util.o 00:01:54.657 CC lib/vmd/vmd.o 00:01:54.657 CC lib/vmd/led.o 00:01:54.657 CC lib/idxd/idxd.o 00:01:54.657 CC lib/idxd/idxd_user.o 00:01:54.657 CC lib/idxd/idxd_kernel.o 00:01:54.657 CC lib/rdma_utils/rdma_utils.o 00:01:54.657 CC lib/env_dpdk/env.o 00:01:54.657 CC lib/rdma_provider/common.o 00:01:54.657 CC lib/env_dpdk/memory.o 00:01:54.657 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:54.657 CC lib/env_dpdk/pci.o 00:01:54.657 CC lib/env_dpdk/init.o 00:01:54.657 CC lib/env_dpdk/threads.o 00:01:54.657 CC lib/env_dpdk/pci_ioat.o 00:01:54.657 CC lib/env_dpdk/pci_virtio.o 00:01:54.657 CC lib/env_dpdk/pci_vmd.o 00:01:54.657 CC lib/env_dpdk/pci_idxd.o 00:01:54.657 CC lib/env_dpdk/pci_event.o 00:01:54.657 CC lib/env_dpdk/sigbus_handler.o 00:01:54.657 CC lib/env_dpdk/pci_dpdk.o 00:01:54.657 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:54.657 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:54.916 LIB libspdk_conf.a 00:01:54.916 LIB libspdk_rdma_provider.a 00:01:54.916 LIB libspdk_json.a 00:01:54.916 LIB libspdk_rdma_utils.a 00:01:55.176 LIB libspdk_idxd.a 00:01:55.176 LIB libspdk_vmd.a 00:01:55.176 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:55.176 CC lib/jsonrpc/jsonrpc_server.o 00:01:55.176 CC lib/jsonrpc/jsonrpc_client.o 00:01:55.176 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:55.434 LIB libspdk_jsonrpc.a 00:01:55.693 LIB libspdk_env_dpdk.a 00:01:55.693 CC lib/rpc/rpc.o 00:01:55.953 LIB libspdk_rpc.a 00:01:56.212 CC lib/trace/trace.o 00:01:56.212 CC lib/trace/trace_flags.o 00:01:56.212 CC lib/trace/trace_rpc.o 00:01:56.212 CC lib/notify/notify.o 00:01:56.212 CC lib/notify/notify_rpc.o 00:01:56.212 CC lib/keyring/keyring.o 00:01:56.212 CC lib/keyring/keyring_rpc.o 00:01:56.471 LIB libspdk_notify.a 00:01:56.471 LIB libspdk_trace.a 00:01:56.471 LIB libspdk_keyring.a 00:01:56.730 CC lib/sock/sock.o 00:01:56.730 CC lib/sock/sock_rpc.o 00:01:56.730 CC lib/thread/thread.o 00:01:56.730 CC lib/thread/iobuf.o 00:01:56.988 LIB libspdk_sock.a 00:01:57.247 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:57.247 CC lib/nvme/nvme_ctrlr.o 00:01:57.247 CC lib/nvme/nvme_fabric.o 00:01:57.247 CC lib/nvme/nvme_ns_cmd.o 00:01:57.247 CC lib/nvme/nvme_ns.o 00:01:57.247 CC lib/nvme/nvme_pcie_common.o 00:01:57.247 CC lib/nvme/nvme_pcie.o 00:01:57.247 CC lib/nvme/nvme_qpair.o 00:01:57.247 CC lib/nvme/nvme.o 00:01:57.247 CC lib/nvme/nvme_quirks.o 00:01:57.247 CC lib/nvme/nvme_transport.o 00:01:57.247 CC lib/nvme/nvme_discovery.o 00:01:57.247 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:57.247 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:57.247 CC lib/nvme/nvme_tcp.o 
00:01:57.247 CC lib/nvme/nvme_opal.o 00:01:57.247 CC lib/nvme/nvme_io_msg.o 00:01:57.247 CC lib/nvme/nvme_poll_group.o 00:01:57.247 CC lib/nvme/nvme_zns.o 00:01:57.247 CC lib/nvme/nvme_stubs.o 00:01:57.247 CC lib/nvme/nvme_auth.o 00:01:57.247 CC lib/nvme/nvme_cuse.o 00:01:57.247 CC lib/nvme/nvme_vfio_user.o 00:01:57.247 CC lib/nvme/nvme_rdma.o 00:01:57.504 LIB libspdk_thread.a 00:01:57.762 CC lib/init/subsystem.o 00:01:57.762 CC lib/init/subsystem_rpc.o 00:01:57.762 CC lib/init/json_config.o 00:01:57.762 CC lib/init/rpc.o 00:01:57.762 CC lib/vfu_tgt/tgt_endpoint.o 00:01:57.762 CC lib/blob/blobstore.o 00:01:57.762 CC lib/blob/request.o 00:01:57.762 CC lib/accel/accel.o 00:01:57.762 CC lib/vfu_tgt/tgt_rpc.o 00:01:57.762 CC lib/blob/zeroes.o 00:01:57.762 CC lib/accel/accel_rpc.o 00:01:57.762 CC lib/accel/accel_sw.o 00:01:57.762 CC lib/blob/blob_bs_dev.o 00:01:57.762 CC lib/virtio/virtio.o 00:01:57.762 CC lib/virtio/virtio_pci.o 00:01:57.762 CC lib/virtio/virtio_vhost_user.o 00:01:57.762 CC lib/virtio/virtio_vfio_user.o 00:01:58.020 LIB libspdk_init.a 00:01:58.020 LIB libspdk_vfu_tgt.a 00:01:58.020 LIB libspdk_virtio.a 00:01:58.277 CC lib/event/app.o 00:01:58.277 CC lib/event/reactor.o 00:01:58.277 CC lib/event/log_rpc.o 00:01:58.277 CC lib/event/app_rpc.o 00:01:58.277 CC lib/event/scheduler_static.o 00:01:58.567 LIB libspdk_accel.a 00:01:58.567 LIB libspdk_event.a 00:01:58.848 LIB libspdk_nvme.a 00:01:58.848 CC lib/bdev/bdev.o 00:01:58.848 CC lib/bdev/bdev_rpc.o 00:01:58.848 CC lib/bdev/bdev_zone.o 00:01:58.848 CC lib/bdev/part.o 00:01:58.848 CC lib/bdev/scsi_nvme.o 00:01:59.782 LIB libspdk_blob.a 00:02:00.040 CC lib/lvol/lvol.o 00:02:00.040 CC lib/blobfs/blobfs.o 00:02:00.040 CC lib/blobfs/tree.o 00:02:00.606 LIB libspdk_lvol.a 00:02:00.607 LIB libspdk_blobfs.a 00:02:00.607 LIB libspdk_bdev.a 00:02:01.174 CC lib/scsi/dev.o 00:02:01.174 CC lib/scsi/lun.o 00:02:01.174 CC lib/scsi/scsi.o 00:02:01.174 CC lib/scsi/port.o 00:02:01.174 CC lib/scsi/scsi_bdev.o 00:02:01.174 CC lib/nbd/nbd.o 00:02:01.174 CC lib/nbd/nbd_rpc.o 00:02:01.175 CC lib/scsi/scsi_rpc.o 00:02:01.175 CC lib/scsi/scsi_pr.o 00:02:01.175 CC lib/scsi/task.o 00:02:01.175 CC lib/nvmf/ctrlr.o 00:02:01.175 CC lib/nvmf/ctrlr_discovery.o 00:02:01.175 CC lib/nvmf/ctrlr_bdev.o 00:02:01.175 CC lib/nvmf/subsystem.o 00:02:01.175 CC lib/ublk/ublk.o 00:02:01.175 CC lib/nvmf/nvmf.o 00:02:01.175 CC lib/nvmf/nvmf_rpc.o 00:02:01.175 CC lib/ublk/ublk_rpc.o 00:02:01.175 CC lib/nvmf/transport.o 00:02:01.175 CC lib/ftl/ftl_core.o 00:02:01.175 CC lib/nvmf/tcp.o 00:02:01.175 CC lib/ftl/ftl_init.o 00:02:01.175 CC lib/nvmf/stubs.o 00:02:01.175 CC lib/ftl/ftl_layout.o 00:02:01.175 CC lib/ftl/ftl_debug.o 00:02:01.175 CC lib/nvmf/mdns_server.o 00:02:01.175 CC lib/ftl/ftl_sb.o 00:02:01.175 CC lib/ftl/ftl_io.o 00:02:01.175 CC lib/nvmf/vfio_user.o 00:02:01.175 CC lib/nvmf/rdma.o 00:02:01.175 CC lib/nvmf/auth.o 00:02:01.175 CC lib/ftl/ftl_l2p.o 00:02:01.175 CC lib/ftl/ftl_l2p_flat.o 00:02:01.175 CC lib/ftl/ftl_nv_cache.o 00:02:01.175 CC lib/ftl/ftl_band.o 00:02:01.175 CC lib/ftl/ftl_band_ops.o 00:02:01.175 CC lib/ftl/ftl_writer.o 00:02:01.175 CC lib/ftl/ftl_rq.o 00:02:01.175 CC lib/ftl/ftl_l2p_cache.o 00:02:01.175 CC lib/ftl/ftl_reloc.o 00:02:01.175 CC lib/ftl/mngt/ftl_mngt.o 00:02:01.175 CC lib/ftl/ftl_p2l.o 00:02:01.175 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:01.175 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:01.175 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:01.175 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:01.175 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:01.175 CC 
lib/ftl/mngt/ftl_mngt_ioch.o 00:02:01.175 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:01.175 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:01.175 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:01.175 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:01.175 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:01.175 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:01.175 CC lib/ftl/utils/ftl_conf.o 00:02:01.175 CC lib/ftl/utils/ftl_md.o 00:02:01.175 CC lib/ftl/utils/ftl_mempool.o 00:02:01.175 CC lib/ftl/utils/ftl_bitmap.o 00:02:01.175 CC lib/ftl/utils/ftl_property.o 00:02:01.175 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:01.175 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:01.175 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:01.175 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:01.175 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:01.175 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:01.175 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:01.175 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:01.175 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:01.175 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:01.175 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:01.175 CC lib/ftl/base/ftl_base_dev.o 00:02:01.175 CC lib/ftl/base/ftl_base_bdev.o 00:02:01.175 CC lib/ftl/ftl_trace.o 00:02:01.434 LIB libspdk_nbd.a 00:02:01.693 LIB libspdk_scsi.a 00:02:01.693 LIB libspdk_ublk.a 00:02:01.953 CC lib/vhost/vhost.o 00:02:01.953 CC lib/vhost/vhost_rpc.o 00:02:01.953 CC lib/vhost/vhost_scsi.o 00:02:01.953 CC lib/vhost/rte_vhost_user.o 00:02:01.953 CC lib/vhost/vhost_blk.o 00:02:01.953 CC lib/iscsi/conn.o 00:02:01.953 CC lib/iscsi/init_grp.o 00:02:01.953 CC lib/iscsi/iscsi.o 00:02:01.953 CC lib/iscsi/md5.o 00:02:01.953 CC lib/iscsi/param.o 00:02:01.953 CC lib/iscsi/iscsi_subsystem.o 00:02:01.953 CC lib/iscsi/portal_grp.o 00:02:01.953 CC lib/iscsi/tgt_node.o 00:02:01.953 CC lib/iscsi/iscsi_rpc.o 00:02:01.953 CC lib/iscsi/task.o 00:02:01.953 LIB libspdk_ftl.a 00:02:02.520 LIB libspdk_nvmf.a 00:02:02.520 LIB libspdk_vhost.a 00:02:02.779 LIB libspdk_iscsi.a 00:02:03.345 CC module/env_dpdk/env_dpdk_rpc.o 00:02:03.345 CC module/vfu_device/vfu_virtio.o 00:02:03.345 CC module/vfu_device/vfu_virtio_scsi.o 00:02:03.345 CC module/vfu_device/vfu_virtio_blk.o 00:02:03.345 CC module/vfu_device/vfu_virtio_rpc.o 00:02:03.345 LIB libspdk_env_dpdk_rpc.a 00:02:03.345 CC module/keyring/file/keyring_rpc.o 00:02:03.345 CC module/keyring/file/keyring.o 00:02:03.345 CC module/accel/ioat/accel_ioat.o 00:02:03.345 CC module/accel/ioat/accel_ioat_rpc.o 00:02:03.345 CC module/sock/posix/posix.o 00:02:03.345 CC module/accel/dsa/accel_dsa.o 00:02:03.345 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:03.345 CC module/accel/dsa/accel_dsa_rpc.o 00:02:03.345 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:03.345 CC module/scheduler/gscheduler/gscheduler.o 00:02:03.345 CC module/accel/iaa/accel_iaa.o 00:02:03.345 CC module/accel/iaa/accel_iaa_rpc.o 00:02:03.345 CC module/accel/error/accel_error.o 00:02:03.345 CC module/blob/bdev/blob_bdev.o 00:02:03.345 CC module/accel/error/accel_error_rpc.o 00:02:03.345 CC module/keyring/linux/keyring_rpc.o 00:02:03.345 CC module/keyring/linux/keyring.o 00:02:03.345 LIB libspdk_keyring_file.a 00:02:03.345 LIB libspdk_scheduler_gscheduler.a 00:02:03.345 LIB libspdk_scheduler_dpdk_governor.a 00:02:03.603 LIB libspdk_accel_ioat.a 00:02:03.603 LIB libspdk_keyring_linux.a 00:02:03.603 LIB libspdk_accel_error.a 00:02:03.603 LIB libspdk_scheduler_dynamic.a 00:02:03.603 LIB libspdk_accel_iaa.a 00:02:03.603 LIB libspdk_accel_dsa.a 00:02:03.603 LIB libspdk_blob_bdev.a 00:02:03.603 LIB libspdk_vfu_device.a 
00:02:03.864 LIB libspdk_sock_posix.a 00:02:03.864 CC module/bdev/null/bdev_null.o 00:02:03.864 CC module/bdev/null/bdev_null_rpc.o 00:02:03.864 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:03.864 CC module/bdev/lvol/vbdev_lvol.o 00:02:03.864 CC module/bdev/ftl/bdev_ftl.o 00:02:03.864 CC module/bdev/split/vbdev_split.o 00:02:03.864 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:03.864 CC module/bdev/error/vbdev_error.o 00:02:03.864 CC module/bdev/split/vbdev_split_rpc.o 00:02:03.864 CC module/bdev/error/vbdev_error_rpc.o 00:02:03.864 CC module/bdev/raid/bdev_raid.o 00:02:03.864 CC module/bdev/raid/bdev_raid_rpc.o 00:02:03.864 CC module/bdev/raid/raid0.o 00:02:03.864 CC module/bdev/raid/bdev_raid_sb.o 00:02:03.864 CC module/bdev/raid/raid1.o 00:02:03.864 CC module/bdev/passthru/vbdev_passthru.o 00:02:03.864 CC module/bdev/delay/vbdev_delay.o 00:02:03.864 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:03.864 CC module/bdev/raid/concat.o 00:02:03.864 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:03.864 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:03.864 CC module/bdev/iscsi/bdev_iscsi.o 00:02:03.864 CC module/bdev/nvme/bdev_nvme.o 00:02:03.864 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:03.864 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:03.864 CC module/bdev/aio/bdev_aio.o 00:02:03.864 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:03.864 CC module/bdev/nvme/bdev_mdns_client.o 00:02:03.864 CC module/bdev/gpt/gpt.o 00:02:03.864 CC module/bdev/aio/bdev_aio_rpc.o 00:02:03.864 CC module/bdev/nvme/nvme_rpc.o 00:02:03.864 CC module/bdev/malloc/bdev_malloc.o 00:02:03.864 CC module/bdev/gpt/vbdev_gpt.o 00:02:03.864 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:03.864 CC module/bdev/nvme/vbdev_opal.o 00:02:03.864 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:03.864 CC module/blobfs/bdev/blobfs_bdev.o 00:02:03.864 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:03.864 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:04.123 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:04.123 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:04.123 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:04.123 LIB libspdk_blobfs_bdev.a 00:02:04.123 LIB libspdk_bdev_null.a 00:02:04.123 LIB libspdk_bdev_split.a 00:02:04.123 LIB libspdk_bdev_ftl.a 00:02:04.123 LIB libspdk_bdev_gpt.a 00:02:04.123 LIB libspdk_bdev_passthru.a 00:02:04.382 LIB libspdk_bdev_error.a 00:02:04.382 LIB libspdk_bdev_malloc.a 00:02:04.382 LIB libspdk_bdev_zone_block.a 00:02:04.382 LIB libspdk_bdev_lvol.a 00:02:04.382 LIB libspdk_bdev_aio.a 00:02:04.382 LIB libspdk_bdev_delay.a 00:02:04.382 LIB libspdk_bdev_iscsi.a 00:02:04.382 LIB libspdk_bdev_virtio.a 00:02:04.641 LIB libspdk_bdev_raid.a 00:02:05.209 LIB libspdk_bdev_nvme.a 00:02:06.153 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:06.153 CC module/event/subsystems/iobuf/iobuf.o 00:02:06.153 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:06.153 CC module/event/subsystems/keyring/keyring.o 00:02:06.153 CC module/event/subsystems/scheduler/scheduler.o 00:02:06.153 CC module/event/subsystems/sock/sock.o 00:02:06.153 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:06.153 CC module/event/subsystems/vmd/vmd.o 00:02:06.153 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:06.153 LIB libspdk_event_vfu_tgt.a 00:02:06.153 LIB libspdk_event_keyring.a 00:02:06.153 LIB libspdk_event_scheduler.a 00:02:06.153 LIB libspdk_event_iobuf.a 00:02:06.153 LIB libspdk_event_vhost_blk.a 00:02:06.153 LIB libspdk_event_sock.a 00:02:06.153 LIB libspdk_event_vmd.a 00:02:06.411 CC module/event/subsystems/accel/accel.o 
00:02:06.411 LIB libspdk_event_accel.a 00:02:06.980 CC module/event/subsystems/bdev/bdev.o 00:02:06.980 LIB libspdk_event_bdev.a 00:02:07.239 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:07.239 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:07.239 CC module/event/subsystems/nbd/nbd.o 00:02:07.239 CC module/event/subsystems/scsi/scsi.o 00:02:07.239 CC module/event/subsystems/ublk/ublk.o 00:02:07.496 LIB libspdk_event_nbd.a 00:02:07.496 LIB libspdk_event_ublk.a 00:02:07.496 LIB libspdk_event_scsi.a 00:02:07.496 LIB libspdk_event_nvmf.a 00:02:07.753 CC module/event/subsystems/iscsi/iscsi.o 00:02:07.753 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:07.753 LIB libspdk_event_iscsi.a 00:02:07.753 LIB libspdk_event_vhost_scsi.a 00:02:08.325 CC app/spdk_nvme_identify/identify.o 00:02:08.325 CC app/trace_record/trace_record.o 00:02:08.325 CC app/spdk_lspci/spdk_lspci.o 00:02:08.325 CC test/rpc_client/rpc_client_test.o 00:02:08.325 CC app/spdk_nvme_perf/perf.o 00:02:08.325 CXX app/trace/trace.o 00:02:08.325 TEST_HEADER include/spdk/accel.h 00:02:08.325 TEST_HEADER include/spdk/assert.h 00:02:08.325 TEST_HEADER include/spdk/accel_module.h 00:02:08.325 CC app/spdk_nvme_discover/discovery_aer.o 00:02:08.325 TEST_HEADER include/spdk/bdev.h 00:02:08.325 TEST_HEADER include/spdk/base64.h 00:02:08.325 TEST_HEADER include/spdk/barrier.h 00:02:08.325 TEST_HEADER include/spdk/bdev_module.h 00:02:08.325 CC app/spdk_top/spdk_top.o 00:02:08.325 TEST_HEADER include/spdk/bdev_zone.h 00:02:08.325 TEST_HEADER include/spdk/bit_array.h 00:02:08.325 TEST_HEADER include/spdk/blob_bdev.h 00:02:08.325 TEST_HEADER include/spdk/bit_pool.h 00:02:08.325 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:08.325 TEST_HEADER include/spdk/blobfs.h 00:02:08.325 TEST_HEADER include/spdk/blob.h 00:02:08.325 TEST_HEADER include/spdk/config.h 00:02:08.325 TEST_HEADER include/spdk/cpuset.h 00:02:08.325 TEST_HEADER include/spdk/conf.h 00:02:08.325 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:08.325 TEST_HEADER include/spdk/crc32.h 00:02:08.325 TEST_HEADER include/spdk/crc16.h 00:02:08.325 TEST_HEADER include/spdk/crc64.h 00:02:08.325 TEST_HEADER include/spdk/dif.h 00:02:08.325 TEST_HEADER include/spdk/dma.h 00:02:08.325 CC app/nvmf_tgt/nvmf_main.o 00:02:08.325 TEST_HEADER include/spdk/endian.h 00:02:08.325 TEST_HEADER include/spdk/env.h 00:02:08.325 TEST_HEADER include/spdk/env_dpdk.h 00:02:08.325 TEST_HEADER include/spdk/event.h 00:02:08.325 TEST_HEADER include/spdk/fd_group.h 00:02:08.325 TEST_HEADER include/spdk/fd.h 00:02:08.325 TEST_HEADER include/spdk/file.h 00:02:08.325 TEST_HEADER include/spdk/ftl.h 00:02:08.325 TEST_HEADER include/spdk/gpt_spec.h 00:02:08.325 TEST_HEADER include/spdk/hexlify.h 00:02:08.325 TEST_HEADER include/spdk/histogram_data.h 00:02:08.325 TEST_HEADER include/spdk/idxd.h 00:02:08.325 TEST_HEADER include/spdk/idxd_spec.h 00:02:08.325 TEST_HEADER include/spdk/init.h 00:02:08.325 TEST_HEADER include/spdk/ioat.h 00:02:08.325 TEST_HEADER include/spdk/ioat_spec.h 00:02:08.325 TEST_HEADER include/spdk/iscsi_spec.h 00:02:08.325 TEST_HEADER include/spdk/json.h 00:02:08.325 TEST_HEADER include/spdk/jsonrpc.h 00:02:08.325 TEST_HEADER include/spdk/keyring.h 00:02:08.325 TEST_HEADER include/spdk/keyring_module.h 00:02:08.325 TEST_HEADER include/spdk/likely.h 00:02:08.325 TEST_HEADER include/spdk/log.h 00:02:08.325 TEST_HEADER include/spdk/lvol.h 00:02:08.325 TEST_HEADER include/spdk/memory.h 00:02:08.325 TEST_HEADER include/spdk/mmio.h 00:02:08.325 TEST_HEADER include/spdk/notify.h 00:02:08.325 TEST_HEADER 
include/spdk/nbd.h 00:02:08.325 TEST_HEADER include/spdk/nvme.h 00:02:08.325 TEST_HEADER include/spdk/nvme_intel.h 00:02:08.325 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:08.325 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:08.325 TEST_HEADER include/spdk/nvme_spec.h 00:02:08.325 TEST_HEADER include/spdk/nvme_zns.h 00:02:08.325 CC app/spdk_dd/spdk_dd.o 00:02:08.325 CC app/iscsi_tgt/iscsi_tgt.o 00:02:08.325 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:08.325 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:08.325 TEST_HEADER include/spdk/nvmf.h 00:02:08.325 TEST_HEADER include/spdk/nvmf_transport.h 00:02:08.325 TEST_HEADER include/spdk/nvmf_spec.h 00:02:08.325 TEST_HEADER include/spdk/opal_spec.h 00:02:08.325 TEST_HEADER include/spdk/opal.h 00:02:08.325 TEST_HEADER include/spdk/pci_ids.h 00:02:08.325 TEST_HEADER include/spdk/pipe.h 00:02:08.325 TEST_HEADER include/spdk/queue.h 00:02:08.325 TEST_HEADER include/spdk/reduce.h 00:02:08.325 TEST_HEADER include/spdk/rpc.h 00:02:08.325 TEST_HEADER include/spdk/scheduler.h 00:02:08.325 TEST_HEADER include/spdk/scsi.h 00:02:08.325 TEST_HEADER include/spdk/scsi_spec.h 00:02:08.325 TEST_HEADER include/spdk/sock.h 00:02:08.325 TEST_HEADER include/spdk/stdinc.h 00:02:08.325 TEST_HEADER include/spdk/string.h 00:02:08.325 TEST_HEADER include/spdk/thread.h 00:02:08.325 TEST_HEADER include/spdk/trace.h 00:02:08.325 TEST_HEADER include/spdk/tree.h 00:02:08.325 TEST_HEADER include/spdk/trace_parser.h 00:02:08.325 TEST_HEADER include/spdk/ublk.h 00:02:08.325 TEST_HEADER include/spdk/util.h 00:02:08.325 TEST_HEADER include/spdk/uuid.h 00:02:08.325 TEST_HEADER include/spdk/version.h 00:02:08.325 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:08.325 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:08.325 TEST_HEADER include/spdk/vhost.h 00:02:08.325 TEST_HEADER include/spdk/vmd.h 00:02:08.325 TEST_HEADER include/spdk/xor.h 00:02:08.325 TEST_HEADER include/spdk/zipf.h 00:02:08.325 CXX test/cpp_headers/accel.o 00:02:08.325 CXX test/cpp_headers/accel_module.o 00:02:08.325 CXX test/cpp_headers/assert.o 00:02:08.325 CXX test/cpp_headers/barrier.o 00:02:08.325 CXX test/cpp_headers/base64.o 00:02:08.325 CXX test/cpp_headers/bdev.o 00:02:08.325 CC app/spdk_tgt/spdk_tgt.o 00:02:08.325 CXX test/cpp_headers/bdev_module.o 00:02:08.325 CXX test/cpp_headers/bdev_zone.o 00:02:08.325 CC test/app/histogram_perf/histogram_perf.o 00:02:08.325 CXX test/cpp_headers/bit_array.o 00:02:08.325 CXX test/cpp_headers/bit_pool.o 00:02:08.325 CC examples/util/zipf/zipf.o 00:02:08.325 CC test/thread/lock/spdk_lock.o 00:02:08.325 CXX test/cpp_headers/blobfs_bdev.o 00:02:08.325 CXX test/cpp_headers/blob_bdev.o 00:02:08.325 CXX test/cpp_headers/blobfs.o 00:02:08.325 CXX test/cpp_headers/blob.o 00:02:08.325 CXX test/cpp_headers/conf.o 00:02:08.325 CXX test/cpp_headers/config.o 00:02:08.325 CXX test/cpp_headers/cpuset.o 00:02:08.325 CC test/app/jsoncat/jsoncat.o 00:02:08.325 CXX test/cpp_headers/crc16.o 00:02:08.325 CXX test/cpp_headers/crc32.o 00:02:08.325 CXX test/cpp_headers/crc64.o 00:02:08.325 CXX test/cpp_headers/dif.o 00:02:08.325 CXX test/cpp_headers/dma.o 00:02:08.325 CXX test/cpp_headers/endian.o 00:02:08.325 CXX test/cpp_headers/env_dpdk.o 00:02:08.325 CXX test/cpp_headers/env.o 00:02:08.325 CXX test/cpp_headers/event.o 00:02:08.325 CC examples/ioat/verify/verify.o 00:02:08.325 CXX test/cpp_headers/fd_group.o 00:02:08.325 CC examples/ioat/perf/perf.o 00:02:08.325 CC test/thread/poller_perf/poller_perf.o 00:02:08.325 CXX test/cpp_headers/fd.o 00:02:08.325 CXX test/cpp_headers/file.o 
00:02:08.325 CXX test/cpp_headers/ftl.o 00:02:08.325 CXX test/cpp_headers/gpt_spec.o 00:02:08.325 CXX test/cpp_headers/hexlify.o 00:02:08.325 CC test/app/stub/stub.o 00:02:08.325 CXX test/cpp_headers/histogram_data.o 00:02:08.325 CXX test/cpp_headers/idxd.o 00:02:08.325 CXX test/cpp_headers/idxd_spec.o 00:02:08.325 CXX test/cpp_headers/init.o 00:02:08.325 CXX test/cpp_headers/ioat.o 00:02:08.325 CC test/env/memory/memory_ut.o 00:02:08.325 CC test/env/vtophys/vtophys.o 00:02:08.325 CC test/env/pci/pci_ut.o 00:02:08.325 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:08.325 CC app/fio/nvme/fio_plugin.o 00:02:08.325 CXX test/cpp_headers/ioat_spec.o 00:02:08.325 LINK spdk_lspci 00:02:08.325 CC test/app/bdev_svc/bdev_svc.o 00:02:08.325 CC test/dma/test_dma/test_dma.o 00:02:08.325 CC app/fio/bdev/fio_plugin.o 00:02:08.325 LINK rpc_client_test 00:02:08.325 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:08.325 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:08.325 CC test/env/mem_callbacks/mem_callbacks.o 00:02:08.325 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:08.325 LINK spdk_nvme_discover 00:02:08.325 LINK spdk_trace_record 00:02:08.325 LINK interrupt_tgt 00:02:08.325 LINK nvmf_tgt 00:02:08.325 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:08.325 LINK jsoncat 00:02:08.325 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:08.325 LINK zipf 00:02:08.325 LINK histogram_perf 00:02:08.586 CXX test/cpp_headers/iscsi_spec.o 00:02:08.586 LINK poller_perf 00:02:08.586 LINK vtophys 00:02:08.586 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:08.586 CXX test/cpp_headers/json.o 00:02:08.586 CXX test/cpp_headers/jsonrpc.o 00:02:08.586 CXX test/cpp_headers/keyring.o 00:02:08.586 CXX test/cpp_headers/keyring_module.o 00:02:08.586 CXX test/cpp_headers/likely.o 00:02:08.586 CXX test/cpp_headers/log.o 00:02:08.586 CXX test/cpp_headers/lvol.o 00:02:08.586 CXX test/cpp_headers/memory.o 00:02:08.586 CXX test/cpp_headers/mmio.o 00:02:08.586 CXX test/cpp_headers/nbd.o 00:02:08.586 CXX test/cpp_headers/notify.o 00:02:08.586 CXX test/cpp_headers/nvme.o 00:02:08.587 CXX test/cpp_headers/nvme_intel.o 00:02:08.587 CXX test/cpp_headers/nvme_ocssd.o 00:02:08.587 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:08.587 CXX test/cpp_headers/nvme_spec.o 00:02:08.587 CXX test/cpp_headers/nvme_zns.o 00:02:08.587 CXX test/cpp_headers/nvmf_cmd.o 00:02:08.587 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:08.587 CXX test/cpp_headers/nvmf.o 00:02:08.587 LINK stub 00:02:08.587 CXX test/cpp_headers/nvmf_spec.o 00:02:08.587 CXX test/cpp_headers/nvmf_transport.o 00:02:08.587 CXX test/cpp_headers/opal.o 00:02:08.587 CXX test/cpp_headers/opal_spec.o 00:02:08.587 CXX test/cpp_headers/pci_ids.o 00:02:08.587 LINK iscsi_tgt 00:02:08.587 CXX test/cpp_headers/pipe.o 00:02:08.587 CXX test/cpp_headers/queue.o 00:02:08.587 CXX test/cpp_headers/reduce.o 00:02:08.587 CXX test/cpp_headers/rpc.o 00:02:08.587 LINK env_dpdk_post_init 00:02:08.587 CXX test/cpp_headers/scheduler.o 00:02:08.587 CXX test/cpp_headers/scsi.o 00:02:08.587 CXX test/cpp_headers/scsi_spec.o 00:02:08.587 CXX test/cpp_headers/sock.o 00:02:08.587 CXX test/cpp_headers/stdinc.o 00:02:08.587 LINK ioat_perf 00:02:08.587 CXX test/cpp_headers/string.o 00:02:08.587 LINK verify 00:02:08.587 CXX test/cpp_headers/thread.o 00:02:08.587 CXX test/cpp_headers/trace.o 00:02:08.587 fio_plugin.c:1582:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:02:08.587 
struct spdk_nvme_fdp_ruhs ruhs; 00:02:08.587 ^ 00:02:08.587 CXX test/cpp_headers/trace_parser.o 00:02:08.587 LINK spdk_tgt 00:02:08.587 LINK bdev_svc 00:02:08.587 LINK spdk_trace 00:02:08.587 CXX test/cpp_headers/tree.o 00:02:08.587 CXX test/cpp_headers/ublk.o 00:02:08.587 CXX test/cpp_headers/util.o 00:02:08.587 CXX test/cpp_headers/version.o 00:02:08.587 CXX test/cpp_headers/uuid.o 00:02:08.587 CXX test/cpp_headers/vfio_user_pci.o 00:02:08.587 CXX test/cpp_headers/vfio_user_spec.o 00:02:08.587 CXX test/cpp_headers/vhost.o 00:02:08.587 CXX test/cpp_headers/vmd.o 00:02:08.587 CXX test/cpp_headers/xor.o 00:02:08.587 CXX test/cpp_headers/zipf.o 00:02:08.848 LINK test_dma 00:02:08.848 LINK llvm_vfio_fuzz 00:02:08.848 LINK spdk_dd 00:02:08.848 LINK nvme_fuzz 00:02:08.848 LINK pci_ut 00:02:08.848 LINK spdk_nvme_identify 00:02:08.848 LINK vhost_fuzz 00:02:08.848 1 warning generated. 00:02:08.848 LINK spdk_bdev 00:02:09.106 LINK spdk_nvme_perf 00:02:09.106 LINK mem_callbacks 00:02:09.106 LINK spdk_nvme 00:02:09.106 LINK llvm_nvme_fuzz 00:02:09.106 LINK spdk_top 00:02:09.106 CC examples/idxd/perf/perf.o 00:02:09.106 CC examples/vmd/led/led.o 00:02:09.106 CC examples/vmd/lsvmd/lsvmd.o 00:02:09.106 CC examples/sock/hello_world/hello_sock.o 00:02:09.106 CC examples/thread/thread/thread_ex.o 00:02:09.106 CC app/vhost/vhost.o 00:02:09.365 LINK led 00:02:09.365 LINK lsvmd 00:02:09.365 LINK memory_ut 00:02:09.365 LINK vhost 00:02:09.365 LINK hello_sock 00:02:09.365 LINK idxd_perf 00:02:09.365 LINK thread 00:02:09.624 LINK spdk_lock 00:02:09.883 LINK iscsi_fuzz 00:02:10.141 CC examples/nvme/arbitration/arbitration.o 00:02:10.141 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:10.141 CC examples/nvme/reconnect/reconnect.o 00:02:10.141 CC examples/nvme/hello_world/hello_world.o 00:02:10.141 CC examples/nvme/abort/abort.o 00:02:10.141 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:10.141 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:10.141 CC examples/nvme/hotplug/hotplug.o 00:02:10.141 CC test/event/reactor/reactor.o 00:02:10.141 CC test/event/reactor_perf/reactor_perf.o 00:02:10.141 CC test/event/app_repeat/app_repeat.o 00:02:10.141 CC test/event/event_perf/event_perf.o 00:02:10.141 CC test/event/scheduler/scheduler.o 00:02:10.141 LINK pmr_persistence 00:02:10.400 LINK hello_world 00:02:10.400 LINK cmb_copy 00:02:10.400 LINK hotplug 00:02:10.400 LINK reactor 00:02:10.400 LINK reconnect 00:02:10.400 LINK arbitration 00:02:10.400 LINK reactor_perf 00:02:10.400 LINK event_perf 00:02:10.400 LINK abort 00:02:10.400 LINK app_repeat 00:02:10.400 LINK nvme_manage 00:02:10.400 LINK scheduler 00:02:10.966 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:10.966 CC test/nvme/boot_partition/boot_partition.o 00:02:10.966 CC test/nvme/sgl/sgl.o 00:02:10.966 CC test/nvme/startup/startup.o 00:02:10.966 CC test/nvme/err_injection/err_injection.o 00:02:10.966 CC test/nvme/simple_copy/simple_copy.o 00:02:10.966 CC test/nvme/reset/reset.o 00:02:10.966 CC test/nvme/connect_stress/connect_stress.o 00:02:10.966 CC test/nvme/reserve/reserve.o 00:02:10.966 CC test/nvme/overhead/overhead.o 00:02:10.966 CC test/nvme/e2edp/nvme_dp.o 00:02:10.966 CC test/nvme/fused_ordering/fused_ordering.o 00:02:10.966 CC test/nvme/aer/aer.o 00:02:10.966 CC test/nvme/fdp/fdp.o 00:02:10.966 CC test/nvme/cuse/cuse.o 00:02:10.966 CC test/nvme/compliance/nvme_compliance.o 00:02:10.966 CC test/blobfs/mkfs/mkfs.o 00:02:10.966 CC test/accel/dif/dif.o 00:02:10.966 CC test/lvol/esnap/esnap.o 00:02:10.966 LINK startup 00:02:10.966 LINK boot_partition 
00:02:10.966 LINK doorbell_aers 00:02:10.966 LINK connect_stress 00:02:10.966 LINK err_injection 00:02:10.966 LINK reserve 00:02:10.966 LINK fused_ordering 00:02:10.966 LINK simple_copy 00:02:10.966 LINK mkfs 00:02:10.966 LINK aer 00:02:10.966 LINK nvme_dp 00:02:10.966 LINK reset 00:02:10.966 LINK sgl 00:02:10.966 LINK overhead 00:02:10.966 LINK fdp 00:02:11.223 LINK nvme_compliance 00:02:11.223 LINK dif 00:02:11.223 CC examples/accel/perf/accel_perf.o 00:02:11.223 CC examples/blob/cli/blobcli.o 00:02:11.223 CC examples/blob/hello_world/hello_blob.o 00:02:11.480 LINK hello_blob 00:02:11.480 LINK accel_perf 00:02:11.738 LINK blobcli 00:02:11.738 LINK cuse 00:02:12.304 CC examples/bdev/bdevperf/bdevperf.o 00:02:12.304 CC examples/bdev/hello_world/hello_bdev.o 00:02:12.563 LINK hello_bdev 00:02:12.821 LINK bdevperf 00:02:12.821 CC test/bdev/bdevio/bdevio.o 00:02:13.079 LINK bdevio 00:02:14.454 LINK esnap 00:02:14.454 CC examples/nvmf/nvmf/nvmf.o 00:02:14.713 LINK nvmf 00:02:16.093 00:02:16.093 real 0m47.901s 00:02:16.093 user 6m12.710s 00:02:16.093 sys 2m25.975s 00:02:16.093 14:30:52 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:16.093 14:30:52 make -- common/autotest_common.sh@10 -- $ set +x 00:02:16.093 ************************************ 00:02:16.093 END TEST make 00:02:16.093 ************************************ 00:02:16.093 14:30:52 -- common/autotest_common.sh@1142 -- $ return 0 00:02:16.093 14:30:52 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:16.093 14:30:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:16.093 14:30:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:16.093 14:30:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.093 14:30:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:16.093 14:30:52 -- pm/common@44 -- $ pid=1298565 00:02:16.093 14:30:52 -- pm/common@50 -- $ kill -TERM 1298565 00:02:16.093 14:30:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.093 14:30:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:16.093 14:30:52 -- pm/common@44 -- $ pid=1298567 00:02:16.093 14:30:52 -- pm/common@50 -- $ kill -TERM 1298567 00:02:16.093 14:30:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.093 14:30:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:16.093 14:30:52 -- pm/common@44 -- $ pid=1298569 00:02:16.093 14:30:52 -- pm/common@50 -- $ kill -TERM 1298569 00:02:16.093 14:30:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.093 14:30:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:16.093 14:30:52 -- pm/common@44 -- $ pid=1298591 00:02:16.093 14:30:52 -- pm/common@50 -- $ sudo -E kill -TERM 1298591 00:02:16.093 14:30:52 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:16.093 14:30:52 -- nvmf/common.sh@7 -- # uname -s 00:02:16.093 14:30:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:16.093 14:30:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:16.093 14:30:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:16.093 14:30:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:16.093 14:30:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:02:16.093 14:30:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:16.093 14:30:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:16.093 14:30:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:16.093 14:30:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:16.093 14:30:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:16.093 14:30:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:02:16.093 14:30:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:02:16.093 14:30:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:16.093 14:30:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:16.093 14:30:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:16.093 14:30:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:16.093 14:30:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:16.093 14:30:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:16.093 14:30:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:16.093 14:30:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:16.093 14:30:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.093 14:30:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.093 14:30:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.093 14:30:52 -- paths/export.sh@5 -- # export PATH 00:02:16.093 14:30:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.093 14:30:52 -- nvmf/common.sh@47 -- # : 0 00:02:16.093 14:30:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:16.093 14:30:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:16.093 14:30:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:16.093 14:30:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:16.093 14:30:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:16.093 14:30:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:16.093 14:30:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:16.093 14:30:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:16.093 14:30:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:16.093 14:30:52 -- spdk/autotest.sh@32 -- # uname -s 00:02:16.093 14:30:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:16.093 14:30:52 -- spdk/autotest.sh@33 -- # 
old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:16.093 14:30:52 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:16.093 14:30:52 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:16.094 14:30:52 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:16.094 14:30:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:16.094 14:30:52 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:16.094 14:30:52 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:16.094 14:30:52 -- spdk/autotest.sh@48 -- # udevadm_pid=1357226 00:02:16.094 14:30:52 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:16.094 14:30:52 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:16.094 14:30:52 -- pm/common@17 -- # local monitor 00:02:16.094 14:30:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.094 14:30:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.094 14:30:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.094 14:30:52 -- pm/common@21 -- # date +%s 00:02:16.094 14:30:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.094 14:30:52 -- pm/common@21 -- # date +%s 00:02:16.094 14:30:52 -- pm/common@25 -- # sleep 1 00:02:16.094 14:30:52 -- pm/common@21 -- # date +%s 00:02:16.094 14:30:52 -- pm/common@21 -- # date +%s 00:02:16.094 14:30:52 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720787452 00:02:16.094 14:30:52 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720787452 00:02:16.094 14:30:52 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720787452 00:02:16.094 14:30:52 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720787452 00:02:16.353 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720787452_collect-vmstat.pm.log 00:02:16.353 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720787452_collect-cpu-load.pm.log 00:02:16.353 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720787452_collect-cpu-temp.pm.log 00:02:16.353 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720787452_collect-bmc-pm.bmc.pm.log 00:02:17.289 14:30:53 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:17.289 14:30:53 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:17.289 14:30:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:17.289 14:30:53 -- common/autotest_common.sh@10 -- # set +x 00:02:17.289 14:30:53 -- spdk/autotest.sh@59 -- # create_test_list 00:02:17.289 14:30:53 -- 
common/autotest_common.sh@746 -- # xtrace_disable 00:02:17.289 14:30:53 -- common/autotest_common.sh@10 -- # set +x 00:02:17.289 14:30:53 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:17.289 14:30:53 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:17.289 14:30:53 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:17.289 14:30:53 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:17.289 14:30:53 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:17.289 14:30:53 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:17.289 14:30:53 -- common/autotest_common.sh@1455 -- # uname 00:02:17.289 14:30:53 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:17.289 14:30:53 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:17.289 14:30:53 -- common/autotest_common.sh@1475 -- # uname 00:02:17.289 14:30:53 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:17.290 14:30:53 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:17.290 14:30:53 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:02:17.290 14:30:53 -- spdk/autotest.sh@72 -- # hash lcov 00:02:17.290 14:30:53 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:02:17.290 14:30:53 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:17.290 14:30:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:17.290 14:30:53 -- common/autotest_common.sh@10 -- # set +x 00:02:17.290 14:30:53 -- spdk/autotest.sh@91 -- # rm -f 00:02:17.290 14:30:53 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:21.546 0000:1a:00.0 (8086 0a54): Already using the nvme driver 00:02:21.546 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:21.546 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:21.546 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:21.546 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:21.546 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:21.546 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:21.546 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:21.546 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:21.546 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:21.546 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:21.546 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:21.546 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:21.546 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:21.546 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:21.546 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:21.546 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:23.451 14:30:59 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:23.451 14:30:59 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:23.451 14:30:59 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:23.451 14:30:59 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:23.451 14:30:59 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:23.451 14:30:59 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:23.451 14:30:59 -- 
common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:23.451 14:30:59 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:23.451 14:30:59 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:23.451 14:30:59 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:23.451 14:30:59 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:23.451 14:30:59 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:23.451 14:30:59 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:23.451 14:30:59 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:23.451 14:30:59 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:23.451 No valid GPT data, bailing 00:02:23.451 14:30:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:23.451 14:30:59 -- scripts/common.sh@391 -- # pt= 00:02:23.451 14:30:59 -- scripts/common.sh@392 -- # return 1 00:02:23.451 14:30:59 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:23.451 1+0 records in 00:02:23.451 1+0 records out 00:02:23.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00648301 s, 162 MB/s 00:02:23.451 14:30:59 -- spdk/autotest.sh@118 -- # sync 00:02:23.451 14:30:59 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:23.451 14:30:59 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:23.451 14:30:59 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:28.725 14:31:04 -- spdk/autotest.sh@124 -- # uname -s 00:02:28.725 14:31:04 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:28.725 14:31:04 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:28.725 14:31:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:28.725 14:31:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:28.725 14:31:04 -- common/autotest_common.sh@10 -- # set +x 00:02:28.725 ************************************ 00:02:28.725 START TEST setup.sh 00:02:28.725 ************************************ 00:02:28.726 14:31:04 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:28.726 * Looking for test storage... 00:02:28.726 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:28.726 14:31:05 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:28.726 14:31:05 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:28.726 14:31:05 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:28.726 14:31:05 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:28.726 14:31:05 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:28.726 14:31:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:28.726 ************************************ 00:02:28.726 START TEST acl 00:02:28.726 ************************************ 00:02:28.726 14:31:05 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:28.726 * Looking for test storage... 
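(For reference: the zoned-device filter traced above and again at the start of the acl test is just a read of /sys/block/<dev>/queue/zoned, which the kernel reports as "none" for ordinary namespaces and as "host-aware" or "host-managed" for zoned ones. The snippet below is a minimal stand-alone sketch of that check, not a verbatim copy of the helper in common/autotest_common.sh; the function name is illustrative. In this run every probed namespace reports "none", so the zoned list stays empty and autotest proceeds to the GPT probe and the dd wipe of /dev/nvme0n1.)

    # Illustrative sketch only -- not the SPDK helper itself.
    # A namespace counts as zoned when /sys/block/<dev>/queue/zoned is not "none".
    list_zoned_devs() {
        local dev state
        for dev in /sys/block/nvme*; do
            [[ -e $dev/queue/zoned ]] || continue
            state=$(<"$dev/queue/zoned")
            [[ $state != none ]] && echo "${dev##*/} ($state)"
        done
    }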
00:02:28.726 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:28.726 14:31:05 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:28.726 14:31:05 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:28.726 14:31:05 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:28.726 14:31:05 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:28.726 14:31:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:28.726 14:31:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:28.726 14:31:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:28.726 14:31:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:28.726 14:31:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:28.726 14:31:05 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:28.726 14:31:05 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:28.726 14:31:05 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:28.726 14:31:05 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:28.726 14:31:05 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:28.726 14:31:05 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:28.726 14:31:05 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:35.290 14:31:11 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:35.290 14:31:11 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:35.290 14:31:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.290 14:31:11 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:35.290 14:31:11 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:35.290 14:31:11 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:02:38.582 Hugepages 00:02:38.582 node hugesize free / total 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.582 00:02:38.582 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.582 14:31:14 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.582 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:1a:00.0 == *:*:*.* ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ 
ioatdma == nvme ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:38.583 14:31:14 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:38.583 14:31:14 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:38.583 14:31:14 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:38.583 14:31:14 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:38.583 ************************************ 00:02:38.583 START TEST denied 00:02:38.583 ************************************ 00:02:38.583 14:31:14 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:38.583 14:31:14 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:1a:00.0' 00:02:38.583 14:31:14 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:38.583 14:31:14 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:1a:00.0' 00:02:38.583 14:31:14 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:38.583 14:31:14 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:45.148 0000:1a:00.0 (8086 0a54): Skipping denied controller at 0000:1a:00.0 00:02:45.148 14:31:20 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:1a:00.0 00:02:45.148 14:31:20 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:45.148 14:31:20 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:45.148 14:31:20 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:1a:00.0 ]] 00:02:45.148 14:31:20 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:1a:00.0/driver 00:02:45.148 14:31:20 
setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:45.148 14:31:20 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:45.148 14:31:20 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:45.148 14:31:20 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:45.148 14:31:20 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:51.714 00:02:51.714 real 0m12.620s 00:02:51.714 user 0m4.106s 00:02:51.714 sys 0m7.737s 00:02:51.714 14:31:27 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:51.714 14:31:27 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:51.714 ************************************ 00:02:51.714 END TEST denied 00:02:51.714 ************************************ 00:02:51.714 14:31:27 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:51.714 14:31:27 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:51.714 14:31:27 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:51.714 14:31:27 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:51.714 14:31:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:51.714 ************************************ 00:02:51.714 START TEST allowed 00:02:51.714 ************************************ 00:02:51.714 14:31:27 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:51.714 14:31:27 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:1a:00.0 00:02:51.714 14:31:27 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:51.714 14:31:27 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:1a:00.0 .*: nvme -> .*' 00:02:51.714 14:31:27 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.714 14:31:27 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:59.836 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:02:59.836 14:31:36 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:59.836 14:31:36 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:59.837 14:31:36 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:59.837 14:31:36 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:59.837 14:31:36 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.502 00:03:06.502 real 0m14.905s 00:03:06.502 user 0m3.973s 00:03:06.502 sys 0m7.696s 00:03:06.502 14:31:42 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:06.502 14:31:42 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:06.502 ************************************ 00:03:06.502 END TEST allowed 00:03:06.502 ************************************ 00:03:06.502 14:31:42 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:06.502 00:03:06.502 real 0m37.507s 00:03:06.502 user 0m11.658s 00:03:06.502 sys 0m22.103s 00:03:06.502 14:31:42 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:06.502 14:31:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:06.502 ************************************ 00:03:06.502 END TEST acl 00:03:06.502 ************************************ 00:03:06.502 14:31:42 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:06.502 
14:31:42 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:06.502 14:31:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:06.502 14:31:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.502 14:31:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:06.502 ************************************ 00:03:06.502 START TEST hugepages 00:03:06.502 ************************************ 00:03:06.502 14:31:42 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:06.502 * Looking for test storage... 00:03:06.502 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 74043348 kB' 'MemAvailable: 77440968 kB' 'Buffers: 2696 kB' 'Cached: 10997668 kB' 'SwapCached: 0 kB' 'Active: 7979028 kB' 'Inactive: 3492336 kB' 'Active(anon): 7513452 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474232 kB' 'Mapped: 195028 kB' 'Shmem: 7042452 kB' 'KReclaimable: 186228 kB' 'Slab: 527304 kB' 'SReclaimable: 186228 kB' 'SUnreclaim: 341076 kB' 'KernelStack: 16304 kB' 'PageTables: 8212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52438216 kB' 'Committed_AS: 8936528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211848 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 
0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.502 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # 
echo 0 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:06.503 14:31:42 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:06.503 14:31:42 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:06.503 14:31:42 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.503 14:31:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:06.503 ************************************ 00:03:06.503 START TEST default_setup 00:03:06.503 ************************************ 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.503 14:31:42 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:09.793 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:09.793 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:09.793 0000:00:04.5 (8086 2021): ioatdma -> 
vfio-pci 00:03:09.793 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:09.793 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:09.793 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:10.053 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:10.053 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:10.053 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:10.053 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:10.053 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:10.053 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:10.053 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:10.053 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:10.053 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:10.053 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:13.344 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76207908 kB' 'MemAvailable: 79605440 kB' 'Buffers: 2696 kB' 'Cached: 10997840 kB' 'SwapCached: 0 kB' 'Active: 7990088 kB' 'Inactive: 3492336 kB' 'Active(anon): 7524512 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485172 kB' 'Mapped: 195004 kB' 'Shmem: 7042624 kB' 'KReclaimable: 186052 kB' 'Slab: 525368 kB' 'SReclaimable: 186052 kB' 
'SUnreclaim: 339316 kB' 'KernelStack: 16160 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8950312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211896 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.256 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:15.257 14:31:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76207712 kB' 'MemAvailable: 79605244 kB' 'Buffers: 2696 kB' 'Cached: 10997844 kB' 'SwapCached: 0 kB' 'Active: 7990316 kB' 'Inactive: 3492336 kB' 'Active(anon): 7524740 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485476 kB' 'Mapped: 194952 kB' 'Shmem: 7042628 kB' 'KReclaimable: 186052 kB' 'Slab: 525420 kB' 'SReclaimable: 186052 kB' 'SUnreclaim: 339368 kB' 'KernelStack: 16176 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8950328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211896 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.258 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76210500 kB' 'MemAvailable: 79608032 kB' 'Buffers: 2696 kB' 'Cached: 10997864 kB' 'SwapCached: 0 kB' 'Active: 7990628 kB' 'Inactive: 3492336 kB' 'Active(anon): 7525052 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485812 kB' 'Mapped: 194952 kB' 'Shmem: 7042648 kB' 'KReclaimable: 186052 kB' 'Slab: 525420 kB' 'SReclaimable: 186052 kB' 'SUnreclaim: 339368 kB' 'KernelStack: 16176 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8950348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211896 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.259 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 
14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.260 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:15.261 nr_hugepages=1024 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:15.261 resv_hugepages=0 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:15.261 surplus_hugepages=0 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:15.261 anon_hugepages=0 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- 
00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace condensed: get_meminfo sets get=HugePages_Total and node= (empty), keeps mem_f=/proc/meminfo, loads it with mapfile -t mem, strips any leading "Node N " prefix, then reads each field with IFS=': ' read -r var val _]
00:03:15.261 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76212352 kB' 'MemAvailable: 79609884 kB' 'Buffers: 2696 kB' 'Cached: 10997884 kB' 'SwapCached: 0 kB' 'Active: 7990336 kB' 'Inactive: 3492336 kB' 'Active(anon): 7524760 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485476 kB' 'Mapped: 194952 kB' 'Shmem: 7042668 kB' 'KReclaimable: 186052 kB' 'Slab: 525420 kB' 'SReclaimable: 186052 kB' 'SUnreclaim: 339368 kB' 'KernelStack: 16176 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8950372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211896 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
[xtrace condensed: the HugePages_Total lookup then scans this snapshot field by field, from MemTotal through Unaccepted in /proc/meminfo order, hitting "continue" for every field until HugePages_Total matches]
00:03:15.263 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:15.263 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:15.263 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:15.263 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:15.263 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
[xtrace condensed: get_nodes loops over /sys/devices/system/node/node+([0-9]), records nodes_sys[0]=1024 and nodes_sys[1]=0, then sets no_nodes=2 and checks (( no_nodes > 0 ))]
00:03:15.263 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:15.263 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:15.263 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[xtrace condensed: this time get_meminfo runs with get=HugePages_Surp and node=0, so mem_f becomes /sys/devices/system/node/node0/meminfo; the file is loaded with mapfile -t mem and the "Node 0 " prefix stripped before scanning]
00:03:15.263 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 41505964 kB' 'MemUsed: 6563948 kB' 'SwapCached: 0 kB' 'Active: 2561548 kB' 'Inactive: 82840 kB' 'Active(anon): 2271296 kB' 'Inactive(anon): 0 kB' 'Active(file): 290252 kB' 'Inactive(file): 82840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2385276 kB' 'Mapped: 118156 kB' 'AnonPages: 262264 kB' 'Shmem: 2012184 kB' 'KernelStack: 7912 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63152 kB' 'Slab: 236960 kB' 'SReclaimable: 63152 kB' 'SUnreclaim: 173808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the scan skips every node0 field from MemTotal through HugePages_Free ("continue" for each) until HugePages_Surp matches]
00:03:15.264 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:15.264 14:31:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:15.264 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:15.264 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:15.264 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:15.264 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:15.264 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:15.264 node0=1024 expecting 1024
00:03:15.264 14:31:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:15.264 real 0m8.967s
00:03:15.264 user 0m2.181s
00:03:15.264 sys 0m3.787s
00:03:15.264 14:31:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:15.264 14:31:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:15.264 ************************************
00:03:15.264 END TEST default_setup
00:03:15.264 ************************************
00:03:15.264 14:31:51 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
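For reference, the default_setup verification that just finished reduces to a small arithmetic identity over the values echoed in the trace. A minimal sketch of that check, using only numbers from this log (variable names follow the trace; the surrounding test harness is omitted):

#!/usr/bin/env bash
# Values taken from the default_setup trace above.
nr_hugepages=1024   # requested pool size (2048 kB pages)
resv=0              # HugePages_Rsvd
surp=0              # HugePages_Surp
total=1024          # HugePages_Total

# The pool is considered healthy when the kernel reports exactly the
# requested number of pages once reserved and surplus pages are counted.
(( total == nr_hugepages + surp + resv )) && echo "hugepage pool OK"

# Per-node view printed as "node0=1024 expecting 1024": the trace shows the
# whole pool on node 0, with nodes_sys[1]=0 reported for node 1.
nodes_test=(1024 0)
echo "node0=${nodes_test[0]} expecting 1024"

With resv and surp both zero the identity collapses to HugePages_Total == nr_hugepages, which is the 1024 == 1024 comparison logged just above.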
00:03:15.264 14:31:51 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:15.264 14:31:51 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:15.264 14:31:51 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:15.265 14:31:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:15.265 ************************************
00:03:15.265 START TEST per_node_1G_alloc
00:03:15.265 ************************************
00:03:15.265 14:31:51 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:15.265 14:31:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:15.265 14:31:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
[xtrace condensed: get_test_nr_hugepages takes size=1048576 (kB) plus node ids 0 and 1, checks (( size >= default_hugepages )), sets nr_hugepages=512 and calls get_test_nr_hugepages_per_node 0 1, which records nodes_test[0]=512 and nodes_test[1]=512 for the two requested nodes before returning 0]
00:03:15.265 14:31:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:15.265 14:31:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:15.265 14:31:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:15.265 14:31:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:15.265 14:31:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
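The per_node_1G_alloc test has just translated a 1 GiB-per-node request into NRHUGE=512 and HUGENODE=0,1; scripts/setup.sh, invoked above with its output below, consumes those knobs to keep the PCI devices on vfio-pci and allocate the requested pages on each node. A rough sketch of the size-to-pages arithmetic as it shows up in the trace (the 2048 kB default page size is taken from the Hugepagesize line in the meminfo dumps; the real get_test_nr_hugepages may differ):

#!/usr/bin/env bash
# Sketch of the request seen above: 1048576 kB (1 GiB) for each of nodes 0 and 1.
size_kb=1048576
default_hugepages_kb=2048        # Hugepagesize reported in /proc/meminfo
node_ids=(0 1)

# 1 GiB / 2 MiB pages = 512 hugepages per node, matching nr_hugepages=512
# and nodes_test[0]=nodes_test[1]=512 in the trace.
nr_hugepages=$(( size_kb / default_hugepages_kb ))

declare -a nodes_test
for node in "${node_ids[@]}"; do
    nodes_test[node]=$nr_hugepages
done

# These are the knobs handed to scripts/setup.sh in the log.
echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${node_ids[*]}")"
# -> NRHUGE=512 HUGENODE=0,1

After setup.sh returns, the test sets nr_hugepages=1024 (512 pages on each of the two nodes) and re-verifies the totals, which is what the verify_nr_hugepages trace below does.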
00:03:19.458 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:19.458 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:19.458 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:19.458 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:19.458 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:19.458 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:19.458 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:19.458 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:19.458 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:19.458 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:19.458 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:19.458 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:19.458 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:19.458 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:19.458 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:19.458 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:19.458 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:20.837 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:20.837 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
[xtrace condensed: verify_nr_hugepages declares its locals (node, sorted_t, sorted_s, surp, resv, anon), checks the transparent_hugepage setting ("always [madvise] never", i.e. not [never]), and calls get_meminfo AnonHugePages; get_meminfo again uses mem_f=/proc/meminfo, loads it with mapfile -t mem and strips any "Node N " prefix before scanning]
00:03:20.838 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76203960 kB' 'MemAvailable: 79601492 kB' 'Buffers: 2696 kB' 'Cached: 10998012 kB' 'SwapCached: 0 kB' 'Active: 7989040 kB' 'Inactive: 3492336 kB' 'Active(anon): 7523464 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483908 kB' 'Mapped: 194124 kB' 'Shmem: 7042796 kB' 'KReclaimable: 186052 kB' 'Slab: 525080 kB' 'SReclaimable: 186052 kB' 'SUnreclaim: 339028 kB' 'KernelStack: 16144 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8941552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211784 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
[xtrace condensed: the AnonHugePages lookup walks this snapshot field by field with "continue" for every non-matching field (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, the Active/Inactive fields and their anon/file splits, Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, ...); the scan continues below]
14:31:57 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # 
anon=0 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76205436 kB' 'MemAvailable: 79602968 kB' 'Buffers: 2696 kB' 'Cached: 10998016 kB' 'SwapCached: 0 kB' 'Active: 7989108 kB' 'Inactive: 3492336 kB' 'Active(anon): 7523532 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484020 kB' 'Mapped: 194124 kB' 'Shmem: 7042800 kB' 'KReclaimable: 186052 kB' 'Slab: 525064 kB' 'SReclaimable: 186052 kB' 'SUnreclaim: 339012 kB' 'KernelStack: 16144 kB' 'PageTables: 8088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8941568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211768 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.839 
14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.839 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.840 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.841 14:31:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76209020 kB' 'MemAvailable: 79606552 kB' 'Buffers: 2696 kB' 'Cached: 10998016 kB' 'SwapCached: 0 kB' 'Active: 7989148 kB' 'Inactive: 3492336 kB' 'Active(anon): 7523572 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484052 kB' 'Mapped: 194124 kB' 'Shmem: 7042800 kB' 'KReclaimable: 186052 kB' 'Slab: 525064 kB' 'SReclaimable: 186052 kB' 'SUnreclaim: 339012 kB' 'KernelStack: 16160 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8941592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211768 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.841 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.842 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:20.843 
nr_hugepages=1024 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:20.843 resv_hugepages=0 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:20.843 surplus_hugepages=0 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:20.843 anon_hugepages=0 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.843 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76207764 kB' 'MemAvailable: 79605296 kB' 'Buffers: 2696 kB' 'Cached: 10998076 kB' 'SwapCached: 0 kB' 'Active: 7988788 kB' 'Inactive: 3492336 kB' 'Active(anon): 7523212 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483616 kB' 'Mapped: 194124 kB' 'Shmem: 7042860 kB' 'KReclaimable: 186052 kB' 'Slab: 525064 kB' 'SReclaimable: 186052 kB' 'SUnreclaim: 339012 kB' 'KernelStack: 16128 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8941612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211768 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.844 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.107 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 42559116 kB' 'MemUsed: 5510796 kB' 'SwapCached: 0 kB' 'Active: 2559932 kB' 'Inactive: 82840 kB' 'Active(anon): 2269680 kB' 'Inactive(anon): 0 kB' 'Active(file): 290252 kB' 'Inactive(file): 82840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2385368 kB' 'Mapped: 117360 kB' 'AnonPages: 260556 kB' 'Shmem: 2012276 kB' 'KernelStack: 7848 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63152 kB' 'Slab: 236712 kB' 'SReclaimable: 63152 kB' 'SUnreclaim: 173560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 
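Most of the volume in this log is setup/common.sh's get_meminfo helper being expanded by xtrace one field at a time: it snapshots the chosen meminfo file into an array with mapfile, strips any leading "Node N " prefix, then walks the "key: value" pairs with IFS=': ' and read -r var val _ until the requested key matches, echoing that value. A minimal self-contained sketch of the same idiom (not the actual setup/common.sh code; the helper name is reused only for readability):

    #!/usr/bin/env bash
    shopt -s extglob
    # Illustrative re-creation of the get_meminfo idiom driving most of this trace.
    # get_meminfo KEY [NODE] -> print KEY's value from /proc/meminfo, or from
    # /sys/devices/system/node/nodeN/meminfo when a node index is given.
    get_meminfo() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }
    get_meminfo HugePages_Total     # 1024 in this run
    get_meminfo HugePages_Surp 0    # surplus pages on NUMA node 0 (0 here)

The two calls at the bottom mirror the lookups seen in this trace: the system-wide HugePages_Total read, then the per-node HugePages_Surp reads that follow.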
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 
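The iterations around here are that same scan applied to node 0: because a node index was passed, the source switched to /sys/devices/system/node/node0/meminfo (visible in the printf above), whose lines carry a "Node 0 " prefix that has to be dropped before the key/value parse. A small illustrative loop showing the per-node read (the traced script strips the prefix with an extglob pattern; a literal prefix is used here for simplicity):

    #!/usr/bin/env bash
    # Illustrative per-node scan: node meminfo lines look like
    # "Node 0 HugePages_Surp:     0", so drop "Node <id> " before parsing.
    node=0
    while read -r line; do
        line=${line#"Node $node "}
        IFS=': ' read -r key val _ <<< "$line"
        [[ $key == HugePages_Surp ]] && echo "node$node HugePages_Surp=$val"
    done < "/sys/devices/system/node/node$node/meminfo"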
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223620 kB' 'MemFree: 33649560 kB' 'MemUsed: 10574060 kB' 'SwapCached: 0 kB' 'Active: 5429092 kB' 'Inactive: 3409496 kB' 'Active(anon): 5253768 kB' 'Inactive(anon): 0 kB' 'Active(file): 175324 kB' 'Inactive(file): 3409496 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8615408 kB' 'Mapped: 76764 kB' 'AnonPages: 223324 kB' 'Shmem: 5030588 kB' 'KernelStack: 8280 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122900 kB' 'Slab: 288352 kB' 'SReclaimable: 122900 kB' 'SUnreclaim: 
165452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.110 
14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:21.110 node0=512 expecting 512 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:21.110 node1=512 expecting 512 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:21.110 00:03:21.110 real 0m5.703s 00:03:21.110 user 0m2.105s 00:03:21.110 sys 0m3.660s 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:21.110 14:31:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:21.110 ************************************ 00:03:21.110 END TEST per_node_1G_alloc 00:03:21.110 ************************************ 00:03:21.110 14:31:57 
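The per_node_1G_alloc test has just finished: both NUMA nodes reported HugePages_Total 512 and HugePages_Free 512 with no surplus, matching the "node0=512 expecting 512" / "node1=512 expecting 512" lines, so the final [[ 512 == 512 ]] check passes after roughly 5.7 s of wall time. An equivalent spot check can be made through the per-node sysfs counters rather than the meminfo files; a hedged sketch, assuming the 2048 kB page size reported earlier in this log:

    #!/usr/bin/env bash
    # Spot check mirroring the assertion that just passed: every NUMA node
    # should hold the expected number of 2048 kB hugepages (512 per node here).
    expected=512
    for f in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
        node=${f#/sys/devices/system/node/node}
        node=${node%%/*}
        got=$(<"$f")
        if (( got == expected )); then
            echo "node$node: $got hugepages (expecting $expected) ok"
        else
            echo "node$node: $got hugepages (expecting $expected) MISMATCH" >&2
        fi
    done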
setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:21.110 14:31:57 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:21.110 14:31:57 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:21.110 14:31:57 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.110 14:31:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:21.110 ************************************ 00:03:21.110 START TEST even_2G_alloc 00:03:21.110 ************************************ 00:03:21.110 14:31:57 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:21.110 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:21.110 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:21.110 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:21.110 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:21.110 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:21.110 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:21.110 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:21.110 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.110 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:21.110 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.111 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.111 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.111 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:21.111 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:21.111 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.111 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:21.111 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:21.111 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:21.111 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.111 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:21.111 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:21.111 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:21.111 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.111 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:21.111 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:21.111 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:21.111 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.111 14:31:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:25.301 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:25.301 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:25.301 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:25.301 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:25.301 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:25.301 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:25.301 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:25.301 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:25.301 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:25.301 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:25.301 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:25.301 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:25.301 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:25.301 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:25.301 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:25.301 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:25.301 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:26.679 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:26.679 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:26.679 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.679 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.679 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:26.679 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:26.679 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:26.679 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.679 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.679 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.679 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:26.679 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:26.679 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.679 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.679 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.679 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.679 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.679 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.679 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76225400 kB' 'MemAvailable: 79622884 kB' 'Buffers: 2696 
kB' 'Cached: 10998196 kB' 'SwapCached: 0 kB' 'Active: 7990856 kB' 'Inactive: 3492336 kB' 'Active(anon): 7525280 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485120 kB' 'Mapped: 194312 kB' 'Shmem: 7042980 kB' 'KReclaimable: 185956 kB' 'Slab: 526116 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 340160 kB' 'KernelStack: 16144 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8942248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211864 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.680 14:32:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: get_meminfo reads and skips each remaining /proc/meminfo field (Active through HardwareCorrupted) that does not match AnonHugePages]
00:03:26.680 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
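The block above is a single get_meminfo call: the helper snapshots /proc/meminfo, then walks it field by field until it finds AnonHugePages and echoes its value. The same helper is invoked again below for HugePages_Surp and HugePages_Rsvd. A simplified reconstruction of that pattern, pieced together from the setup/common.sh@17-33 xtrace lines (illustrative, not the verbatim SPDK source):

#!/usr/bin/env bash
# Reconstructed sketch of the get_meminfo pattern visible in the xtrace;
# not the verbatim setup/common.sh implementation.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    local -a mem
    # With a node argument, read that NUMA node's view instead of the global one.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && echo "$val" && return 0
    done
    return 1
}

get_meminfo AnonHugePages      # prints 0 on this run
get_meminfo HugePages_Free 0   # free 2048 kB pages on NUMA node 0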
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76225124 kB' 'MemAvailable: 79622608 kB' 'Buffers: 2696 kB' 'Cached: 10998216 kB' 'SwapCached: 0 kB' 'Active: 7989724 kB' 'Inactive: 3492336 kB' 'Active(anon): 7524148 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484468 kB' 'Mapped: 194180 kB' 'Shmem: 7043000 kB' 'KReclaimable: 185956 kB' 'Slab: 526108 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 340152 kB' 'KernelStack: 16128 kB' 'PageTables: 8028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8942264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211832 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.681 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': '
[xtrace condensed: the same per-field scan repeats for HugePages_Surp, skipping every field from MemAvailable through HugePages_Rsvd]
00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
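At this point verify_nr_hugepages has read AnonHugePages (anon=0) and HugePages_Surp (surp=0) and is about to read HugePages_Rsvd. Together with the "node0=512 expecting 512" output from the previous test, the verification appears to compare each node's hugepage count against the per-node expectation set up earlier, crediting any surplus pages first. A rough sketch of that flow, reusing the get_meminfo sketch above; which per-node counter is sampled and how surplus is credited are assumptions, not the verbatim setup/hugepages.sh logic:

#!/usr/bin/env bash
# Assumed verification flow; the real setup/hugepages.sh bookkeeping may differ.
anon=$(get_meminfo AnonHugePages)     # 0 in this run (no THP pages counted)
surp=$(get_meminfo HugePages_Surp)    # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)    # read next in the trace below

nodes_test=([0]=512 [1]=512)          # expectation from get_test_nr_hugepages_per_node
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += surp ))    # the "(( nodes_test[node] += 0 ))" step seen earlier
    got=$(get_meminfo HugePages_Free "$node")   # HugePages_Free is an assumption here
    echo "node$node=$got expecting ${nodes_test[node]}"
    if [[ $got != "${nodes_test[node]}" ]]; then
        echo "node$node mismatch: got $got, expected ${nodes_test[node]}" >&2
        exit 1
    fi
done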
14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76225124 kB' 'MemAvailable: 79622608 kB' 'Buffers: 2696 kB' 'Cached: 10998220 kB' 'SwapCached: 0 kB' 'Active: 7990156 kB' 'Inactive: 3492336 kB' 'Active(anon): 7524580 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484852 kB' 'Mapped: 194180 kB' 'Shmem: 7043004 kB' 'KReclaimable: 185956 kB' 'Slab: 526108 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 340152 kB' 'KernelStack: 16144 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8942288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211832 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.682 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.682 14:32:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
[xtrace condensed: the per-field scan for HugePages_Rsvd skips the fields from Cached through VmallocChunk and continues below]
00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- #
read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:26.684 nr_hugepages=1024 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.684 resv_hugepages=0 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.684 surplus_hugepages=0 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.684 anon_hugepages=0 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76225124 kB' 'MemAvailable: 79622608 kB' 'Buffers: 2696 kB' 'Cached: 10998240 kB' 'SwapCached: 0 kB' 'Active: 7990168 kB' 'Inactive: 3492336 kB' 'Active(anon): 7524592 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484852 kB' 'Mapped: 194180 kB' 'Shmem: 7043024 kB' 'KReclaimable: 185956 kB' 'Slab: 526108 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 340152 kB' 'KernelStack: 16144 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8942308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211832 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
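
The block of trace above is setup/common.sh's get_meminfo helper scanning every field of /proc/meminfo until it reaches the requested key: HugePages_Rsvd resolved to 0 a few entries back, and HugePages_Total (1024) is being looked up here. A minimal sketch of that lookup, reconstructed only from the traced commands (the real helper in SPDK's test/setup/common.sh may structure its loop differently; get_meminfo_sketch is a stand-in name), looks like this:

shopt -s extglob   # needed for the +([0-9]) pattern used below, as in the trace

# Reconstruction of the lookup visible in the '-- setup/common.sh@17..33 --'
# trace entries; an approximation, not the actual SPDK helper.
get_meminfo_sketch() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo mem
    # With a node argument, the per-NUMA-node meminfo file is preferred.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node <N> "; strip it, mirroring the
    # traced 'mem=("${mem[@]#Node +([0-9]) }")'.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Every non-matching field produces one '[[ ... ]]' plus 'continue' pair
        # in the trace; the matching field echoes its value and returns.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
}

Called as get_meminfo_sketch HugePages_Total (system-wide) or get_meminfo_sketch HugePages_Surp 0 (node 0), it mirrors the invocations that appear in this log.
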
00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.684 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.685 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 42558512 kB' 'MemUsed: 5511400 kB' 'SwapCached: 0 kB' 'Active: 2559728 kB' 'Inactive: 82840 kB' 'Active(anon): 2269476 kB' 'Inactive(anon): 0 kB' 'Active(file): 290252 kB' 'Inactive(file): 82840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 
'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2385424 kB' 'Mapped: 117416 kB' 'AnonPages: 260304 kB' 'Shmem: 2012332 kB' 'KernelStack: 7864 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63152 kB' 'Slab: 236740 kB' 'SReclaimable: 63152 kB' 'SUnreclaim: 173588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
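
At this point the test has already confirmed the system-wide count (hugepages.sh@107/@110: 1024 == nr_hugepages + surp + resv, i.e. the even 2G allocation of 1024 x 2048 kB pages) and is now checking how those pages landed per NUMA node: get_nodes recorded 512 expected pages for each of the two nodes, and HugePages_Surp is being read from each node's meminfo file, starting with node 0 above. A simplified, self-contained sketch of that bookkeeping follows; the array names mirror the trace, while the nodes_test initialization and the awk lookup are assumptions standing in for the real helpers:

shopt -s extglob                          # for the node+([0-9]) glob from the trace

declare -a nodes_sys nodes_test
resv=0                                    # HugePages_Rsvd read earlier in the trace

# Expect an even split: 512 x 2 MiB hugepages on each of the two NUMA nodes.
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=512
    nodes_test[${node##*node}]=512
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || exit 1

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    # Per-node surplus comes from /sys/devices/system/node/nodeN/meminfo, where
    # every line carries a "Node <N>" prefix (hence fields 3 and 4).
    surp=$(awk '$3 == "HugePages_Surp:" {print $4}' \
           "/sys/devices/system/node/node$node/meminfo")
    (( nodes_test[node] += ${surp:-0} ))
done

In the traced run both nodes report HugePages_Total: 512, HugePages_Free: 512 and HugePages_Surp: 0, so the per-node totals match the expected even split.
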
00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.686 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.687 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223620 kB' 'MemFree: 33665856 kB' 'MemUsed: 10557764 kB' 'SwapCached: 0 kB' 'Active: 5430816 kB' 'Inactive: 3409496 kB' 'Active(anon): 5255492 kB' 'Inactive(anon): 0 kB' 'Active(file): 175324 kB' 'Inactive(file): 3409496 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8615556 kB' 'Mapped: 76764 kB' 'AnonPages: 224868 kB' 'Shmem: 5030736 kB' 'KernelStack: 8296 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122804 kB' 'Slab: 289368 kB' 'SReclaimable: 122804 kB' 'SUnreclaim: 166564 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:26.687 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.687 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.687 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.687 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.687 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.687 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.687 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.946 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.946 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.946 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.946 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.946 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.946 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.947 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.948 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.948 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.948 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:26.948 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.948 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.948 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.948 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.948 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:26.948 node0=512 expecting 512 00:03:26.948 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.948 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.948 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.948 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:26.948 node1=512 expecting 512 00:03:26.948 14:32:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:26.948 00:03:26.948 real 0m5.717s 00:03:26.948 user 0m2.082s 00:03:26.948 sys 0m3.702s 00:03:26.948 14:32:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.948 14:32:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:26.948 ************************************ 00:03:26.948 END TEST even_2G_alloc 00:03:26.948 ************************************ 00:03:26.948 14:32:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:26.948 14:32:03 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:26.948 14:32:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.948 14:32:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 
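The even_2G_alloc block above ends with both NUMA nodes reporting the expected 512 huge pages. The per-node counts come from the get_meminfo helper whose xtrace is shown: when a node is given it reads /sys/devices/system/node/node<N>/meminfo, strips the "Node <N>" prefix from every line, and scans for the requested field. Below is a condensed, standalone sketch of that lookup, pieced together from the trace; the wrapper name get_node_meminfo is illustrative only and is not part of setup/common.sh.

get_node_meminfo() {
    local key=$1 node=$2
    local mem_f=/proc/meminfo line var val _
    local -a mem
    # Use the per-node sysfs file when a node index is supplied, as in the trace.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem <"$mem_f"
    # Per-node files prefix each line with "Node <N> "; drop it, the same way the
    # trace does with "${mem[@]#Node +([0-9]) }" (needs extglob at expansion time).
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$key" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done
    echo 0
}

# E.g. after even_2G_alloc both of these should print 512, matching the
# "node0=512 expecting 512" / "node1=512 expecting 512" lines above:
#   get_node_meminfo HugePages_Total 0
#   get_node_meminfo HugePages_Total 1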
00:03:26.948 14:32:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:26.948 ************************************ 00:03:26.948 START TEST odd_alloc 00:03:26.948 ************************************ 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.948 14:32:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:31.141 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:31.141 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:31.141 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:31.141 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:31.141 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:31.141 0000:00:04.3 (8086 2021): Already using 
the vfio-pci driver 00:03:31.141 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:31.141 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:31.141 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:31.141 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:31.141 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:31.141 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:31.141 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:31.141 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:31.141 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:31.141 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:31.141 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76210636 kB' 'MemAvailable: 79608120 kB' 'Buffers: 2696 kB' 'Cached: 10998392 kB' 'SwapCached: 0 kB' 'Active: 7992596 kB' 'Inactive: 3492336 kB' 'Active(anon): 7527020 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486588 kB' 'Mapped: 194344 kB' 'Shmem: 7043176 kB' 'KReclaimable: 185956 kB' 'Slab: 525180 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 339224 kB' 'KernelStack: 16368 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 8944072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211880 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.524 14:32:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76212336 kB' 'MemAvailable: 79609820 kB' 'Buffers: 2696 kB' 'Cached: 10998396 kB' 'SwapCached: 0 kB' 'Active: 7991256 kB' 'Inactive: 3492336 kB' 'Active(anon): 7525680 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485712 kB' 'Mapped: 194248 kB' 'Shmem: 7043180 kB' 'KReclaimable: 185956 kB' 'Slab: 525156 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 339200 kB' 'KernelStack: 16224 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 8945572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211880 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.525 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 
14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76211644 kB' 'MemAvailable: 79609128 kB' 'Buffers: 2696 kB' 'Cached: 10998412 kB' 'SwapCached: 0 kB' 'Active: 7991468 kB' 'Inactive: 3492336 kB' 'Active(anon): 7525892 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485904 kB' 'Mapped: 194248 kB' 'Shmem: 7043196 kB' 'KReclaimable: 185956 kB' 'Slab: 525156 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 339200 kB' 'KernelStack: 16144 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 8945592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211880 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 
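(Editor's sketch.) The xtrace above and below is produced by the get_meminfo helper in setup/common.sh: the script snapshots /proc/meminfo (or a per-node meminfo file when a node number is given), splits each entry on ': ', and walks the keys one by one until the requested key (HugePages_Surp above, HugePages_Rsvd below) matches, then echoes its value and returns. A minimal sketch of that logic, reconstructed from the trace rather than copied verbatim from the SPDK source, could look like this:

get_meminfo() {
    # get_meminfo <key> [<numa-node>]  ->  prints the numeric value of <key>
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # per-node query reads e.g. /sys/devices/system/node/node0/meminfo instead
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local line var val _rest
    while read -r line; do
        line=${line#Node [0-9]* }          # per-node files prefix every line with "Node N "
        IFS=': ' read -r var val _rest <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                    # e.g. HugePages_Rsvd -> 0
            return 0
        fi
    done <"$mem_f"
    return 1
}

With a helper of this shape, the values used in this run would be read as surp=$(get_meminfo HugePages_Surp) and resv=$(get_meminfo HugePages_Rsvd), both 0 here.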
00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.526 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.527 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.527 14:32:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:32.528 nr_hugepages=1025 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:32.528 resv_hugepages=0 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:32.528 surplus_hugepages=0 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:32.528 anon_hugepages=0 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@18 -- # local node= 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76211436 kB' 'MemAvailable: 79608920 kB' 'Buffers: 2696 kB' 'Cached: 10998412 kB' 'SwapCached: 0 kB' 'Active: 7991804 kB' 'Inactive: 3492336 kB' 'Active(anon): 7526228 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486188 kB' 'Mapped: 194240 kB' 'Shmem: 7043196 kB' 'KReclaimable: 185956 kB' 'Slab: 525092 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 339136 kB' 'KernelStack: 16224 kB' 'PageTables: 8472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 8945612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211960 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.528 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 
14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 14:32:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 42551148 kB' 'MemUsed: 5518764 kB' 'SwapCached: 0 kB' 'Active: 2560292 kB' 'Inactive: 82840 kB' 'Active(anon): 2270040 kB' 'Inactive(anon): 0 kB' 'Active(file): 290252 kB' 'Inactive(file): 82840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2385500 kB' 'Mapped: 117484 kB' 'AnonPages: 260764 kB' 'Shmem: 2012408 kB' 'KernelStack: 7928 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63152 kB' 'Slab: 236744 kB' 
'SReclaimable: 63152 kB' 'SUnreclaim: 173592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.530 
14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 [ setup/common.sh@31-32 reads the remaining node0 meminfo fields (Inactive(file) through HugePages_Free) the same way, and none of them matches HugePages_Surp ] 00:03:32.531 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.531 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.531 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.531 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.531 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.531 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.531 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:32.531 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.531 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:32.531 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.531 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.531 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.531 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:32.531 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:32.531 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.531 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.531 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.531 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.531 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223620 kB' 'MemFree: 33657676 kB' 'MemUsed: 10565944 kB' 'SwapCached: 0 kB' 'Active: 5431144 kB' 'Inactive: 3409496 kB' 'Active(anon): 5255820 kB' 'Inactive(anon): 0 kB' 'Active(file): 175324 kB' 'Inactive(file): 3409496 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8615652 kB' 'Mapped: 76764 kB' 'AnonPages: 225008 kB' 'Shmem: 5030832 kB' 'KernelStack: 8376 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122804 kB' 'Slab: 288348 kB' 'SReclaimable: 122804 kB' 'SUnreclaim: 165544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
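Note: the trace above is setup/common.sh's get_meminfo helper resolving the node1 meminfo file, stripping the "Node <n> " prefix, and then scanning the dump field by field for HugePages_Surp. A minimal standalone sketch of that parsing pattern in bash (illustrative only -- the function name get_field and the exact loop are assumptions, not an excerpt of the SPDK script):

    # Sketch of the get_meminfo pattern traced above (assumed helper, not SPDK code).
    # Usage: get_field HugePages_Surp 1   -> value for NUMA node 1
    get_field() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # Prefer the per-node file when a node is given and it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines look like "Node 1 MemTotal: ..."; strip the prefix so the
        # field names match plain /proc/meminfo (no-op for /proc/meminfo itself).
        mem=("${mem[@]#Node $node }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

Run against the node1 dump printed above, get_field HugePages_Surp 1 would print 0, which is the value common.sh@33 echoes back into the test.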
[ setup/common.sh@31-32 reads the node1 meminfo fields just printed (MemTotal through HugePages_Free) the same way, and none of them matches HugePages_Surp ] 00:03:32.533 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.533 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.533 14:32:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
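Note: in the summary that follows, odd_alloc reports node0=512 expecting 513 and node1=513 expecting 512 and still passes, because setup/hugepages.sh@126-130 keys two scratch arrays by the counts themselves and compares the resulting sets, so the 1025 odd pages may land as 512/513 or 513/512. A hedged bash sketch of that idiom (the literal counts and array contents here are illustrative):

    # Compare the multiset of per-node hugepage counts, not the per-node assignment.
    nodes_test=(512 513)   # counts actually observed on node0/node1 (illustrative)
    nodes_sys=(513 512)    # counts the test expected per node (illustrative)
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # index the array by the count itself
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    # Indexed-array keys expand in numeric order, so both sides read "512 513".
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'per-node distribution accepted'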
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.533 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.533 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.533 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.533 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:32.533 node0=512 expecting 513 00:03:32.533 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.533 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.533 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.533 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:32.533 node1=513 expecting 512 00:03:32.533 14:32:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:32.533 00:03:32.533 real 0m5.707s 00:03:32.533 user 0m2.097s 00:03:32.533 sys 0m3.669s 00:03:32.533 14:32:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.533 14:32:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:32.533 ************************************ 00:03:32.533 END TEST odd_alloc 00:03:32.533 ************************************ 00:03:32.793 14:32:09 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:32.793 14:32:09 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:32.793 14:32:09 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.793 14:32:09 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.793 14:32:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:32.793 ************************************ 00:03:32.793 START TEST custom_alloc 00:03:32.793 ************************************ 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # 
user_nodes=() 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@78 -- # return 0 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.793 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.794 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.794 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:32.794 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:32.794 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:32.794 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:32.794 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:32.794 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:32.794 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:32.794 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:32.794 14:32:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:32.794 14:32:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.794 14:32:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:36.083 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:36.083 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:36.083 0000:80:04.7 (8086 2021): Already using the vfio-pci 
driver 00:03:36.083 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:36.341 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:36.341 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:36.341 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:36.341 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:36.341 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:36.341 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:38.246 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:38.246 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:38.246 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:38.246 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:38.246 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:38.246 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:38.246 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:38.246 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:38.246 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:38.246 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:38.247 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:38.247 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:38.247 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:38.247 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.247 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.247 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.247 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.247 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.247 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.247 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.247 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.247 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75188240 kB' 'MemAvailable: 78585720 kB' 'Buffers: 2696 kB' 'Cached: 10998580 kB' 'SwapCached: 0 kB' 'Active: 7992440 kB' 'Inactive: 3492336 kB' 'Active(anon): 7526864 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486876 kB' 'Mapped: 194456 kB' 'Shmem: 7043364 kB' 'KReclaimable: 185948 kB' 'Slab: 525764 kB' 'SReclaimable: 185948 kB' 'SUnreclaim: 339816 kB' 'KernelStack: 16144 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 8943592 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 211880 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:38.247 [ setup/common.sh@31-32 reads the meminfo fields just printed (MemTotal through HardwareCorrupted) one by one, and none of them matches AnonHugePages ] 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
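Note: the verification running here is for the custom_alloc request made earlier through HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' (1536 pages in total, which matches HugePages_Total: 1536 in the dump above). On Linux, a per-node split like that is exposed through the per-node hugepage counters in sysfs; a generic sketch of that interface follows (standard kernel paths shown for illustration -- not a verbatim excerpt of scripts/setup.sh, and writing the counters requires root):

    # Per-node 2 MiB hugepage counters (generic kernel interface, illustrative).
    sz=2048kB
    echo 512  > /sys/devices/system/node/node0/hugepages/hugepages-$sz/nr_hugepages
    echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-$sz/nr_hugepages
    # Read the split back, which is roughly what the verification step is doing:
    for n in 0 1; do
        echo "node$n: $(cat /sys/devices/system/node/node$n/hugepages/hugepages-$sz/nr_hugepages) pages"
    done
    grep -E 'HugePages_(Total|Free|Surp)' /proc/meminfo   # totals should show 1536

The get_meminfo calls that continue below read HugePages_Surp back from /proc/meminfo and the per-node meminfo files to check that none of the 1536 pages were allocated as surplus.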
00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75188848 kB' 'MemAvailable: 78586328 kB' 'Buffers: 2696 kB' 'Cached: 10998580 kB' 'SwapCached: 0 kB' 'Active: 7991760 kB' 'Inactive: 3492336 kB' 'Active(anon): 7526184 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486108 kB' 'Mapped: 194328 kB' 'Shmem: 7043364 kB' 'KReclaimable: 185948 kB' 'Slab: 525884 kB' 'SReclaimable: 185948 kB' 'SUnreclaim: 339936 kB' 'KernelStack: 16128 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 8943608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211848 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.248 14:32:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.248 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 
14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.249 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75188344 kB' 'MemAvailable: 78585824 kB' 'Buffers: 2696 kB' 'Cached: 10998612 kB' 'SwapCached: 0 kB' 'Active: 7992292 kB' 'Inactive: 3492336 kB' 'Active(anon): 7526716 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486672 kB' 'Mapped: 194328 kB' 'Shmem: 7043396 kB' 'KReclaimable: 185948 kB' 'Slab: 525856 kB' 'SReclaimable: 185948 kB' 'SUnreclaim: 339908 kB' 'KernelStack: 16176 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 8944128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211864 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.250 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.251 14:32:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:38.252 nr_hugepages=1536 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:38.252 resv_hugepages=0 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:38.252 surplus_hugepages=0 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:38.252 anon_hugepages=0 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75188344 kB' 'MemAvailable: 78585824 kB' 'Buffers: 2696 kB' 'Cached: 10998632 kB' 'SwapCached: 0 kB' 'Active: 7992304 kB' 'Inactive: 3492336 kB' 'Active(anon): 7526728 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486688 kB' 'Mapped: 194328 kB' 'Shmem: 7043416 kB' 'KReclaimable: 185948 kB' 'Slab: 525856 kB' 'SReclaimable: 185948 kB' 'SUnreclaim: 339908 kB' 'KernelStack: 16160 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 8944152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211864 kB' 'VmallocChunk: 0 kB' 
'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.252 14:32:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.252 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.253 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.253 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.253 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.253 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.253 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.253 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.253 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.253 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.253 14:32:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.253 14:32:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _
[... xtrace elided: setup/common.sh@32 compares each remaining meminfo key (SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) against HugePages_Total and continues ...]
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
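The run above is the tail of get_meminfo() from test/setup/common.sh scanning /proc/meminfo for HugePages_Total and echoing 1536, the total pool the custom_alloc test reserved (512 pages on node 0 plus 1024 on node 1, as the get_nodes pass below confirms). A minimal re-creation of that lookup loop, reconstructed from the xtrace rather than copied from the SPDK tree, looks roughly like this:

    #!/usr/bin/env bash
    # Sketch of the key lookup that produces the "echo 1536" above.
    # Names follow the trace; this is not the literal setup/common.sh code.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        # Per-node statistics live under sysfs when a node id is given.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        # The real helper slurps the file with mapfile and strips any
        # "Node N " prefix; a sed pre-filter does the same job here.
        while IFS=': ' read -r var val _; do
            # Skip every key until the requested one; this is the long
            # "[[ X == \H\u\g\e... ]] / continue" run in the log.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

    get_meminfo HugePages_Total   # prints 1536 on the machine traced above

The value is then checked against nr_hugepages + surp + resv (1536 == 1536 + 0 + 0 in this run) before the per-node breakdown is verified.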
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:38.254 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 42560416 kB' 'MemUsed: 5509496 kB' 'SwapCached: 0 kB' 'Active: 2560744 kB' 'Inactive: 82840 kB' 'Active(anon): 2270492 kB' 'Inactive(anon): 0 kB' 'Active(file): 290252 kB' 'Inactive(file): 82840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2385652 kB' 'Mapped: 117564 kB' 'AnonPages: 261100 kB' 'Shmem: 2012560 kB' 'KernelStack: 7912 kB' 'PageTables: 3788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63144 kB' 'Slab: 236824 kB' 'SReclaimable: 63144 kB' 'SUnreclaim: 173680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace elided: setup/common.sh@32 walks the node0 keys above, skipping each one until HugePages_Surp matches ...]
00:03:38.516 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.516 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:38.516 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
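Because a node id was passed this time (get_meminfo HugePages_Surp 0), the source file is switched to /sys/devices/system/node/node0/meminfo. Unlike /proc/meminfo, every line there is prefixed with the node name ("Node 0 MemTotal: ..."), which the trace shows being stripped with the extglob expansion ${mem[@]#Node +([0-9]) }. A small stand-alone illustration of that stripping step, using values from the node 0 dump above:

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern below

    # Lines as they appear in /sys/devices/system/node/node0/meminfo.
    mem=(
        'Node 0 MemTotal:       48069912 kB'
        'Node 0 MemFree:        42560416 kB'
        'Node 0 HugePages_Surp:        0'
    )

    # Same prefix-stripping step as setup/common.sh@29 in the trace.
    mem=("${mem[@]#Node +([0-9]) }")

    printf '%s\n' "${mem[@]}"
    # MemTotal:       48069912 kB
    # MemFree:        42560416 kB
    # HugePages_Surp:        0

After the strip, the same IFS=': ' read loop works identically for the system-wide and the per-node files.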
00:03:38.516 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:38.516 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:38.516 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:38.516 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:38.516 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:38.516 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:38.516 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:38.516 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:38.516 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.516 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:38.516 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:38.516 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.516 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.516 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.516 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:38.516 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223620 kB' 'MemFree: 32629052 kB' 'MemUsed: 11594568 kB' 'SwapCached: 0 kB' 'Active: 5431716 kB' 'Inactive: 3409496 kB' 'Active(anon): 5256392 kB' 'Inactive(anon): 0 kB' 'Active(file): 175324 kB' 'Inactive(file): 3409496 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8615716 kB' 'Mapped: 76764 kB' 'AnonPages: 225704 kB' 'Shmem: 5030896 kB' 'KernelStack: 8280 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122804 kB' 'Slab: 289032 kB' 'SReclaimable: 122804 kB' 'SUnreclaim: 166228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace elided: setup/common.sh@32 walks the node1 keys above, skipping each one until HugePages_Surp matches ...]
00:03:38.518 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.518 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:38.518 14:32:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:38.518 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
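Both nodes report HugePages_Surp: 0, so the per-node expectations stay at the values recorded by get_nodes (512 and 1024). In outline, the bookkeeping that setup/hugepages.sh@115-117 steps through above looks like the following sketch; it reuses the hedged get_meminfo sketch from earlier and simplifies the resv/surp handling to the values seen in this run:

    #!/usr/bin/env bash
    # Per-node accounting as traced above: start from what the test expects
    # on each node, then add reserved and surplus pages reported by the kernel.
    nodes_test=([0]=512 [1]=1024)   # expected pages per node in this run
    resv=0                          # no reserved pages in this run

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(get_meminfo HugePages_Surp "$node")   # "0" for both nodes above
        (( nodes_test[node] += surp ))
    done

    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=1024

With both adjustments zero, the comparison against nodes_sys that follows is a straight 512-vs-512 and 1024-vs-1024 check.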
00:03:38.518 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:38.518 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:38.518 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:38.518 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:38.518 node0=512 expecting 512
00:03:38.518 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:38.518 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:38.518 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:38.518 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:38.518 node1=1024 expecting 1024
00:03:38.518 14:32:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:38.518 
00:03:38.518 real 0m5.710s
00:03:38.518 user 0m2.034s
00:03:38.518 sys 0m3.742s
00:03:38.518 14:32:15 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:38.518 14:32:15 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:38.518 ************************************
00:03:38.518 END TEST custom_alloc
00:03:38.518 ************************************
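custom_alloc passes because the observed per-node totals match the requested split: the distinct values collected into sorted_t (measured) and sorted_s (reported by /sys) both come out as 512,1024, which is what the final [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] test above checks. A compact re-creation of that comparison; the exact join used in setup/hugepages.sh is an assumption here, only the end result is taken from the trace:

    #!/usr/bin/env bash
    # Collect the distinct per-node counts on both sides and compare them
    # as sorted, comma-joined key lists, mirroring the check traced above.
    declare -A sorted_t sorted_s
    nodes_test=([0]=512 [1]=1024)   # what the test expects per node
    nodes_sys=([0]=512 [1]=1024)    # what the system reports per node

    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1
        sorted_s[${nodes_sys[node]}]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done

    # Associative-array keys are unordered, so sort before joining.
    join_keys() { printf '%s\n' "$@" | sort -n | paste -sd,; }
    [[ $(join_keys "${!sorted_t[@]}") == "$(join_keys "${!sorted_s[@]}")" ]] \
        && echo OK   # 512,1024 == 512,1024 in the run above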
00:03:38.518 14:32:15 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:38.518 14:32:15 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:38.518 14:32:15 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:38.518 14:32:15 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:38.518 14:32:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:38.518 ************************************
00:03:38.518 START TEST no_shrink_alloc
00:03:38.518 ************************************
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:38.518 14:32:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:42.712 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:42.712 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:42.712 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:42.712 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:42.712 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:42.712 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:42.712 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:42.712 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:42.712 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:42.712 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:42.712 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:42.712 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:42.712 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:42.712 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:42.712 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:42.712 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:42.712 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:44.092 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:44.092 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:44.092 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:44.092 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:44.092 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:44.092 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:44.092 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:44.092 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:44.092 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:44.092 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:44.092 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
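no_shrink_alloc asks get_test_nr_hugepages for 2097152 kB on node 0, which the trace resolves to nr_hugepages=1024, consistent with 2097152 / 2048 = 1024 default-sized pages, all assigned to nodes_test[0]. verify_nr_hugepages then begins with a transparent-hugepage gate: the string "always [madvise] never" above comes from the standard THP knob, and because it does not contain "[never]", AnonHugePages has to be read and accounted for as well. A hedged sketch of that start of verify_nr_hugepages; the variable names and the division are assumptions reconstructed from the trace, and get_meminfo refers to the earlier sketch:

    #!/usr/bin/env bash
    # Sketch of the opening of verify_nr_hugepages as traced above.
    size_kb=2097152                        # requested by no_shrink_alloc
    hp_kb=$(get_meminfo Hugepagesize)      # 2048 kB on this system
    nr_hugepages=$(( size_kb / hp_kb ))    # -> 1024 pages expected on node 0

    # THP state, e.g. "always [madvise] never" on the machine in this log.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP is not disabled, so anonymous huge pages may exist alongside
        # the hugetlb pool; the dump below shows AnonHugePages: 0 kB here.
        anon=$(get_meminfo AnonHugePages)
    fi
    echo "nr_hugepages=$nr_hugepages anon=${anon}kB"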
00:03:44.092 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:44.092 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.092 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.092 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.092 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.092 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.092 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.092 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.092 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:44.092 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76218620 kB' 'MemAvailable: 79616084 kB' 'Buffers: 2696 kB' 'Cached: 10998776 kB' 'SwapCached: 0 kB' 'Active: 7994152 kB' 'Inactive: 3492336 kB' 'Active(anon): 7528576 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488464 kB' 'Mapped: 194644 kB' 'Shmem: 7043560 kB' 'KReclaimable: 185916 kB' 'Slab: 525540 kB' 'SReclaimable: 185916 kB' 'SUnreclaim: 339624 kB' 'KernelStack: 16304 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8944996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211736 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
[... xtrace elided: setup/common.sh@32 starts walking the keys above (MemTotal through Inactive(file)), skipping each one while looking for AnonHugePages ...]
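Because no node id was passed this time, the lookup runs against the full /proc/meminfo dump printed above, and the long match-and-continue run that follows is just the loop walking down that list until it reaches AnonHugePages (0 kB in this run). Outside the test harness the same answer can be had in one line; this is an equivalent illustration, not code from the SPDK scripts:

    awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo    # -> 0 on this machine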
[... xtrace elided: the AnonHugePages lookup continues through Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack and PageTables, skipping each key ...]
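Before the trace resumes below, note that the dump above already contains the numbers verify_nr_hugepages is after: a pool of 1024 pages, all free, none reserved or surplus, and Hugetlb = 1024 x 2048 kB = 2097152 kB, exactly the size no_shrink_alloc requested. A quick way to confirm that relationship outside the harness; illustrative only, not part of setup/common.sh:

    # Cross-check the hugetlb pool numbers shown in the meminfo dump above.
    read -r total free < <(awk '/^HugePages_Total:/ {t=$2} /^HugePages_Free:/ {f=$2} END {print t, f}' /proc/meminfo)
    size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    echo "pool: $total pages x ${size_kb} kB = $(( total * size_kb )) kB (free: $free)"
    # On the machine above: pool: 1024 pages x 2048 kB = 2097152 kB (free: 1024)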
00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76218352 kB' 'MemAvailable: 79615816 kB' 'Buffers: 2696 kB' 'Cached: 10998780 kB' 'SwapCached: 0 kB' 'Active: 7994204 kB' 'Inactive: 3492336 kB' 'Active(anon): 7528628 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488488 kB' 'Mapped: 194460 kB' 'Shmem: 7043564 kB' 'KReclaimable: 185916 kB' 
'Slab: 525476 kB' 'SReclaimable: 185916 kB' 'SUnreclaim: 339560 kB' 'KernelStack: 16288 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8945016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211720 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.095 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76218616 kB' 'MemAvailable: 79616080 kB' 'Buffers: 2696 kB' 'Cached: 10998796 kB' 'SwapCached: 0 kB' 'Active: 7994264 kB' 'Inactive: 3492336 kB' 'Active(anon): 7528688 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488492 kB' 'Mapped: 194460 kB' 'Shmem: 7043580 kB' 'KReclaimable: 185916 kB' 'Slab: 525476 kB' 'SReclaimable: 185916 kB' 'SUnreclaim: 339560 kB' 'KernelStack: 16288 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8945036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211704 kB' 
'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 14:32:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 
14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.098 14:32:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:44.098 nr_hugepages=1024 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.098 resv_hugepages=0 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.098 surplus_hugepages=0 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.098 anon_hugepages=0 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76217608 kB' 'MemAvailable: 79615072 kB' 'Buffers: 2696 kB' 'Cached: 10998820 kB' 'SwapCached: 0 kB' 'Active: 7994312 kB' 'Inactive: 3492336 kB' 'Active(anon): 7528736 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488492 kB' 'Mapped: 194460 kB' 'Shmem: 7043604 kB' 'KReclaimable: 185916 kB' 'Slab: 525476 kB' 'SReclaimable: 185916 kB' 'SUnreclaim: 339560 kB' 'KernelStack: 16288 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8945060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211704 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 14:32:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 
14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:44.099 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 41507252 kB' 'MemUsed: 6562660 kB' 'SwapCached: 0 kB' 'Active: 2562860 kB' 'Inactive: 82840 kB' 'Active(anon): 2272608 kB' 'Inactive(anon): 0 kB' 'Active(file): 290252 kB' 'Inactive(file): 82840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2385784 kB' 'Mapped: 117636 kB' 'AnonPages: 263164 kB' 'Shmem: 2012692 kB' 'KernelStack: 8024 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63112 kB' 'Slab: 236748 kB' 'SReclaimable: 63112 kB' 'SUnreclaim: 173636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
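The trace above is the get_meminfo helper from setup/common.sh resolving HugePages_Surp for node 0: given a node argument it switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, strips the leading "Node N " prefix from every line, then walks the key/value pairs with IFS=': ' until the requested field matches and its value is echoed. A minimal sketch of that parsing approach (reconstructed from the trace, not copied from the SPDK script; the function name is illustrative):

    get_meminfo_sketch() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        # Per-node snapshots live in sysfs and prefix every line with "Node N ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node $node }                 # drop the per-node prefix if present
            IFS=': ' read -r var val _ <<< "$line"   # "HugePages_Surp:  0" -> var=HugePages_Surp val=0
            if [[ $var == "$get" ]]; then
                echo "$val"                          # unit suffix (kB) falls into $_ and is dropped
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    # e.g. get_meminfo_sketch HugePages_Surp 0   -> 0 on the node0 snapshot printed above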
00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.100 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.101 14:32:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:44.101 node0=1024 expecting 1024 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.101 14:32:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:48.290 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:48.290 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:48.290 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:48.290 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:48.290 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:48.290 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:48.290 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:48.290 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:48.290 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:48.290 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:48.290 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:48.290 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:48.290 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:48.290 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:48.290 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 
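After get_nodes and the per-node surplus lookup, verify_nr_hugepages reports the expected allocation per node ("node0=1024 expecting 1024" above) before setup.sh is re-run with CLEAR_HUGE=no and NRHUGE=512; the INFO line that follows the device listing shows setup.sh declining to shrink the existing 1024-page pool. A hedged sketch of the same per-node accounting using the kernel's hugetlb sysfs counters (helper name and output format are illustrative, not SPDK's):

    check_hugepages() {
        local expected=$1 size_kb=${2:-2048} node count sum=0
        for node in /sys/devices/system/node/node[0-9]*; do
            # nr_hugepages under each node reports that node's share of the pool
            count=$(<"$node/hugepages/hugepages-${size_kb}kB/nr_hugepages")
            echo "${node##*/}=$count"
            (( sum += count ))
        done
        if (( sum == expected )); then
            echo "OK: $sum hugepages allocated across nodes"
        else
            echo "Requested $expected hugepages but $sum already allocated"
        fi
    }

    # e.g. check_hugepages 512 would report the mismatch noted in the INFO line below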
00:03:48.290 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:48.290 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:49.667 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76224460 kB' 'MemAvailable: 79621924 kB' 'Buffers: 2696 kB' 'Cached: 10998944 kB' 'SwapCached: 0 kB' 'Active: 7994696 kB' 'Inactive: 3492336 kB' 'Active(anon): 7529120 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488680 kB' 'Mapped: 194944 kB' 'Shmem: 7043728 kB' 'KReclaimable: 185916 kB' 'Slab: 525876 kB' 'SReclaimable: 185916 kB' 'SUnreclaim: 339960 kB' 'KernelStack: 16224 kB' 'PageTables: 8224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8945808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211864 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.669 14:32:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var 
00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:49.669 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76225148 kB' 'MemAvailable: 79622612 kB' 'Buffers: 2696 kB' 'Cached: 10998948 kB' 'SwapCached: 0 kB' 'Active: 7994808 kB' 'Inactive: 3492336 kB' 'Active(anon): 7529232 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488844 kB' 'Mapped: 194876 kB' 'Shmem: 7043732 kB' 'KReclaimable: 185916 kB' 'Slab: 525868 kB' 'SReclaimable: 185916 kB' 'SUnreclaim: 339952 kB' 'KernelStack: 16208 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8945824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211816 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
[... setup/common.sh@31-32 then steps through each key of that snapshot in order (MemTotal through HugePages_Rsvd), none of them matching HugePages_Surp ...]
00:03:49.671 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.671 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:49.671 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:49.671 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
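The same helper also handles per-NUMA-node queries: when a node number is passed it reads /sys/devices/system/node/node<N>/meminfo instead (the [[ -e ... ]] probe above runs with an empty node and falls back to /proc/meminfo), and the mem=("${mem[@]#Node +([0-9]) }") step strips the "Node N " prefix those per-node files carry. A rough sketch of that path, assuming node 0 exists; the variable names are illustrative:

    # per-node variant of the same lookup (extglob needed for the +([0-9]) pattern)
    shopt -s extglob
    node=0
    mem_f=/sys/devices/system/node/node${node}/meminfo
    [[ -e $mem_f ]] || mem_f=/proc/meminfo              # fall back to the system-wide file
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")                    # "Node 0 HugePages_Total: ..." -> "HugePages_Total: ..."
    printf '%s\n' "${mem[@]}" | grep HugePages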
00:03:49.671 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:49.671 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:49.671 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:49.671 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:49.671 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:49.671 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.671 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:49.671 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:49.671 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:49.671 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.671 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76223520 kB' 'MemAvailable: 79620984 kB' 'Buffers: 2696 kB' 'Cached: 10998964 kB' 'SwapCached: 0 kB' 'Active: 7995120 kB' 'Inactive: 3492336 kB' 'Active(anon): 7529544 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489172 kB' 'Mapped: 194876 kB' 'Shmem: 7043748 kB' 'KReclaimable: 185916 kB' 'Slab: 525816 kB' 'SReclaimable: 185916 kB' 'SUnreclaim: 339900 kB' 'KernelStack: 16208 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8945848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211816 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
[... setup/common.sh@31-32 again steps through each key of that snapshot (MemTotal through HugePages_Free), none of them matching HugePages_Rsvd ...]
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:49.673 nr_hugepages=1024
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:49.673 resv_hugepages=0
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:49.673 surplus_hugepages=0
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:49.673 anon_hugepages=0
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
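hugepages.sh@102-@109 then reports the pool this test expects (nr_hugepages=1024 with no reserved, surplus or anonymous hugepages) and sanity-checks those figures against the values get_meminfo just returned. A compact sketch of that kind of consistency check, reusing the hypothetical get_meminfo_sketch helper from above; the real script's variable names and exact comparisons may differ:

    expected=1024                                      # pool size this test configures
    surp=$(get_meminfo_sketch HugePages_Surp)          # 0 in the trace above
    resv=$(get_meminfo_sketch HugePages_Rsvd)          # 0 in the trace above
    nr_hugepages=$(get_meminfo_sketch HugePages_Total) # 1024 in the trace above
    (( expected == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
    (( expected == nr_hugepages )) || echo "unexpected hugepage pool size: $nr_hugepages" >&2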
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:49.673 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:49.674 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 76223520 kB' 'MemAvailable: 79620984 kB' 'Buffers: 2696 kB' 'Cached: 10998984 kB' 'SwapCached: 0 kB' 'Active: 7994748 kB' 'Inactive: 3492336 kB' 'Active(anon): 7529172 kB' 'Inactive(anon): 0 kB' 'Active(file): 465576 kB' 'Inactive(file): 3492336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488740 kB' 'Mapped: 194876 kB' 'Shmem: 7043768 kB' 'KReclaimable: 185916 kB' 'Slab: 525816 kB' 'SReclaimable: 185916 kB' 'SUnreclaim: 339900 kB' 'KernelStack: 16192 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 8945872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211816 kB' 'VmallocChunk: 0 kB' 'Percpu: 50880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
[... HugePages_Total/Free stay at 1024/1024 and HugePages_Rsvd/Surp at 0 across all three snapshots; setup/common.sh@31-32 is once more stepping through the keys (MemTotal onwards) looking for HugePages_Total and has reached VmallocUsed at this point, with the scan continuing below ...]
00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 --
# [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:49.675 14:32:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 41495960 kB' 'MemUsed: 6573952 kB' 'SwapCached: 0 kB' 'Active: 2563156 kB' 'Inactive: 82840 kB' 'Active(anon): 2272904 kB' 'Inactive(anon): 0 kB' 'Active(file): 290252 kB' 'Inactive(file): 82840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2385868 kB' 'Mapped: 117700 kB' 'AnonPages: 262756 kB' 'Shmem: 2012776 kB' 'KernelStack: 7960 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63112 kB' 'Slab: 237192 kB' 'SReclaimable: 63112 kB' 'SUnreclaim: 174080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.675 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 
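The lines above are setup/common.sh's get_meminfo helper at work: it reads /proc/meminfo, or the per-node file /sys/devices/system/node/nodeN/meminfo when a node is given, strips the "Node N " prefix from each line, and then walks the fields with IFS=': ' until it hits the requested key (HugePages_Total returned 1024 just above; the HugePages_Surp scan for node 0 continues below). A minimal standalone sketch of that lookup, assuming the same file layout; the function and variable names here are illustrative, not the script's own:

    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Sketch: look up one field from /proc/meminfo, or from a node's own
    # meminfo file when a node number is given.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node N " prefix; strip it so the keys match.
        mem=("${mem[@]#Node +([0-9]) }")
        local var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo_sketch HugePages_Total 0, it would print 1024 for the node-0 snapshot shown above.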
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 
14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.676 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:49.677 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:49.677 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:49.677 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:49.677 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:49.677 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:49.677 node0=1024 expecting 1024 00:03:49.677 14:32:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:49.677 00:03:49.677 real 0m11.283s 00:03:49.677 user 0m4.059s 00:03:49.677 sys 0m7.380s 00:03:49.677 14:32:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:49.677 14:32:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:49.677 ************************************ 00:03:49.677 END TEST no_shrink_alloc 00:03:49.677 ************************************ 00:03:49.935 14:32:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:49.935 14:32:26 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:49.935 14:32:26 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:49.935 14:32:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:49.935 14:32:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:49.935 14:32:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:49.935 14:32:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:49.935 14:32:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:49.935 14:32:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:49.935 14:32:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:49.935 14:32:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:49.935 14:32:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:49.935 14:32:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:49.935 14:32:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:49.935 14:32:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:49.935 00:03:49.935 real 0m43.766s 00:03:49.935 user 0m14.811s 00:03:49.935 sys 0m26.415s 00:03:49.935 14:32:26 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:49.935 14:32:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:49.935 ************************************ 00:03:49.935 END TEST hugepages 00:03:49.935 ************************************ 00:03:49.935 14:32:26 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:49.935 14:32:26 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:49.935 14:32:26 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.935 14:32:26 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.935 14:32:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:49.935 ************************************ 00:03:49.935 START TEST driver 00:03:49.935 ************************************ 00:03:49.935 14:32:26 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:49.935 * Looking for test storage... 
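That closes the hugepages suite: the per-node check confirmed the whole 1024-page pool sits on node 0 (node0=1024 expecting 1024), and clear_hp then wrote 0 into every hugepages-*/nr_hugepages knob on both nodes and exported CLEAR_HUGE=yes before handing over to the driver suite, whose trace picks up again right after this sketch. A minimal sketch of that teardown, assuming the usual sysfs layout; the function name is illustrative and the writes need root:

    # Sketch: free every hugepage reservation on every NUMA node, as the
    # clear_hp teardown above does.
    clear_hugepages_sketch() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                # Writing 0 asks the kernel to release this node's pool for
                # that page size (2048kB, 1048576kB, ...).
                echo 0 > "$hp/nr_hugepages"
            done
        done
        export CLEAR_HUGE=yes
    }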
00:03:49.935 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:49.935 14:32:26 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:49.935 14:32:26 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.935 14:32:26 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.052 14:32:33 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:58.052 14:32:33 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.052 14:32:33 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.052 14:32:33 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:58.052 ************************************ 00:03:58.052 START TEST guess_driver 00:03:58.052 ************************************ 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 190 > 0 )) 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:58.053 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:58.053 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:58.053 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:58.053 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:58.053 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:58.053 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:58.053 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:58.053 14:32:33 
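The guess_driver trace above settles on vfio-pci because /sys/kernel/iommu_groups is populated (190 groups on this box) and modprobe --show-depends can resolve the whole vfio_pci module chain; the unsafe no-IOMMU parameter is read but is N here. A minimal sketch of that decision, with the caveat that the unsafe no-IOMMU escape hatch and the non-vfio fallback are assumptions about branches this log never takes:

    # Sketch: choose vfio-pci the way the trace above does, by checking that
    # IOMMU groups exist and that the vfio_pci module chain resolves.
    pick_driver_sketch() {
        local unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        # The traced script builds a glob array and counts it; a simple
        # directory listing gives the same group count here.
        local groups
        groups=$(ls -1 /sys/kernel/iommu_groups 2>/dev/null | wc -l)
        if (( groups > 0 )) || [[ $unsafe_vfio == [Yy] ]]; then
            # vfio_pci is only usable if modprobe can resolve its whole chain.
            if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
                echo vfio-pci
                return 0
            fi
        fi
        echo 'No valid driver found' >&2
        return 1
    }

The long run of "read -r _ _ _ _ marker setup_driver" and "[[ vfio-pci == vfio-pci ]]" lines that follows is the test reading setup.sh's config output device by device and confirming each listed function was in fact bound to the guessed driver.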
setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:58.053 Looking for driver=vfio-pci 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.053 14:32:33 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.585 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.586 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.586 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker 
setup_driver 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.845 14:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.133 14:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.133 14:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.133 14:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.037 14:32:42 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:06.037 14:32:42 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:06.037 14:32:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.037 14:32:42 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:14.205 00:04:14.205 real 0m15.927s 00:04:14.205 user 0m3.986s 00:04:14.205 sys 0m7.931s 00:04:14.205 14:32:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.205 14:32:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:14.205 ************************************ 00:04:14.205 END TEST guess_driver 00:04:14.205 ************************************ 00:04:14.205 14:32:49 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:14.205 00:04:14.205 real 0m23.089s 00:04:14.205 user 0m6.055s 
00:04:14.205 sys 0m12.216s 00:04:14.205 14:32:49 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.205 14:32:49 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:14.205 ************************************ 00:04:14.205 END TEST driver 00:04:14.205 ************************************ 00:04:14.205 14:32:49 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:14.205 14:32:49 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:14.205 14:32:49 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.205 14:32:49 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.205 14:32:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:14.205 ************************************ 00:04:14.205 START TEST devices 00:04:14.205 ************************************ 00:04:14.205 14:32:49 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:14.205 * Looking for test storage... 00:04:14.205 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:14.205 14:32:49 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:14.205 14:32:49 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:14.205 14:32:49 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.205 14:32:49 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.481 14:32:55 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:19.481 14:32:55 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:19.481 14:32:55 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:19.481 14:32:55 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:19.481 14:32:55 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:19.481 14:32:55 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:19.481 14:32:55 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:19.481 14:32:55 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:19.481 14:32:55 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:19.481 14:32:55 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:19.481 14:32:55 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:19.481 14:32:55 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:19.481 14:32:55 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:19.481 14:32:55 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:19.481 14:32:55 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:19.481 14:32:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:19.481 14:32:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:19.481 14:32:55 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:1a:00.0 00:04:19.481 14:32:55 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:04:19.481 14:32:55 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:19.481 14:32:55 setup.sh.devices -- 
scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:19.481 14:32:55 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:19.481 No valid GPT data, bailing 00:04:19.481 14:32:55 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:19.481 14:32:55 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:19.481 14:32:55 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:19.481 14:32:55 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:19.481 14:32:55 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:19.481 14:32:55 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:19.481 14:32:55 setup.sh.devices -- setup/common.sh@80 -- # echo 4000787030016 00:04:19.481 14:32:55 setup.sh.devices -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size )) 00:04:19.481 14:32:55 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:19.481 14:32:55 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:1a:00.0 00:04:19.481 14:32:55 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:19.481 14:32:55 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:19.481 14:32:55 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:19.481 14:32:55 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.481 14:32:55 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.481 14:32:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:19.481 ************************************ 00:04:19.481 START TEST nvme_mount 00:04:19.481 ************************************ 00:04:19.481 14:32:55 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:19.481 14:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:19.481 14:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:19.481 14:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.481 14:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:19.481 14:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:19.481 14:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:19.481 14:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:19.481 14:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:19.481 14:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:19.481 14:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:19.481 14:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:19.481 14:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:19.481 14:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.481 14:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:19.481 14:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 
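Before starting nvme_mount, the devices suite decided nvme0n1 was safe to use: spdk-gpt.py bailed with "No valid GPT data", blkid found no PTTYPE, so block_in_use returned false, and sec_size_to_bytes reported about 4 TB against the 3 GiB minimum. A minimal sketch of that "free and big enough" check, using blkid only as a stand-in for SPDK's spdk-gpt.py helper; partition_drive's own trace continues below:

    # Sketch: accept a disk for the mount test only if it carries no
    # partition table and is at least min_disk_size bytes.
    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace

    disk_is_free_and_big_enough() {
        local dev=$1
        # Any PTTYPE value (gpt, dos, ...) means the disk is already in use.
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || return 1
        local bytes=$(( $(< "/sys/block/$dev/size") * 512 ))   # 512 B sectors
        (( bytes >= min_disk_size ))
    }

    # e.g. disk_is_free_and_big_enough nvme0n1 && echo "nvme0n1 is usable"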
00:04:19.481 14:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.481 14:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:19.481 14:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:19.481 14:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:20.419 Creating new GPT entries in memory. 00:04:20.419 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:20.419 other utilities. 00:04:20.419 14:32:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:20.419 14:32:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.419 14:32:56 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:20.419 14:32:56 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:20.419 14:32:56 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:21.356 Creating new GPT entries in memory. 00:04:21.356 The operation has completed successfully. 00:04:21.356 14:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:21.356 14:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:21.356 14:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1388621 00:04:21.356 14:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.356 14:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:21.356 14:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.356 14:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:21.356 14:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:21.356 14:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.356 14:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:21.356 14:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:04:21.356 14:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:21.356 14:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.356 14:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:21.356 14:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:21.356 14:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:21.356 14:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:21.356 14:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:21.356 14:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.356 14:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:04:21.356 14:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:21.356 14:32:58 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.356 14:32:58 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:25.574 14:33:01 
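The first nvme_mount phase is complete at this point: partition_drive zapped the disk and created a single 1 GiB partition with sgdisk (wrapped in sync_dev_uevents.sh so the new partition's uevent is waited on), mkfs.ext4 -qF formatted nvme0n1p1, and it was mounted at the test mount point; verify then scans setup.sh's config output with PCI_ALLOWED restricted to 0000:1a:00.0, looking for the "Active devices: mount@nvme0n1:nvme0n1p1" line, and the remaining PCI status reads plus the cleanup follow below. A minimal sketch of that partition-format-mount sequence, with placeholder disk/mnt names and udevadm settle standing in for the uevent wrapper; it is destructive, so only point it at a scratch disk:

    # Sketch of the partition_drive + mkfs + mount sequence traced above.
    disk=/dev/nvme0n1
    mnt=/tmp/nvme_mount_sketch

    sgdisk "$disk" --zap-all                # wipe any existing GPT/MBR structures
    sgdisk "$disk" --new=1:2048:2099199     # one 1 GiB partition (2097152 sectors)
    udevadm settle                          # wait for /dev/nvme0n1p1 to show up
    mkfs.ext4 -qF "${disk}p1"
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"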
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:25.574 14:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.949 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:26.950 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:26.950 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.950 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.950 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.950 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:26.950 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.950 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.950 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.950 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:26.950 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:26.950 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:26.950 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:27.210 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:27.210 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:27.210 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:27.210 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:27.210 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs 
/dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:27.210 14:33:03 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:27.210 14:33:03 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.210 14:33:03 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:27.210 14:33:03 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:27.210 14:33:03 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.210 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.210 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:04:27.210 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:27.210 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.210 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.210 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:27.210 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:27.210 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:27.210 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:27.210 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.210 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:04:27.210 14:33:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:27.210 14:33:03 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.210 14:33:03 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:31.402 14:33:07 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:31.402 14:33:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.793 14:33:09 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.793 14:33:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:32.793 14:33:09 setup.sh.devices.nvme_mount -- 
setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.793 14:33:09 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.793 14:33:09 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:32.793 14:33:09 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.793 14:33:09 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:1a:00.0 data@nvme0n1 '' '' 00:04:32.793 14:33:09 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:04:32.793 14:33:09 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:32.793 14:33:09 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:32.793 14:33:09 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:32.793 14:33:09 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:32.793 14:33:09 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:32.793 14:33:09 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:32.793 14:33:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.793 14:33:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:04:32.793 14:33:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:32.793 14:33:09 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.793 14:33:09 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:36.987 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.987 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:36.987 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:36.987 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:36.988 14:33:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.362 14:33:14 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.363 14:33:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:38.363 14:33:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:38.363 14:33:14 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:38.363 14:33:14 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.363 14:33:14 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.363 14:33:14 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:38.363 14:33:14 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:38.363 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:38.363 00:04:38.363 real 0m19.054s 00:04:38.363 user 0m5.697s 00:04:38.363 sys 0m11.220s 
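The nvme_mount run that just finished (real 0m19.054s above) reduces to a short partition/format/mount/teardown cycle. A minimal sketch of that sequence, using the same commands that appear in the trace; MNT stands in for the nvme_mount directory under the workspace, and the dummy-file creation is implied rather than visible in the xtrace:

    MNT=/tmp/nvme_mount                               # placeholder for the test mount point
    sgdisk /dev/nvme0n1 --zap-all                     # destroy any existing GPT/MBR
    sgdisk /dev/nvme0n1 --new=1:2048:2099199          # one 1 GiB partition
    mkfs.ext4 -qF /dev/nvme0n1p1                      # quiet, forced ext4 format
    mkdir -p "$MNT" && mount /dev/nvme0n1p1 "$MNT"
    touch "$MNT/test_nvme"                            # dummy file the verify step checks for
    # verify: setup.sh config must report nvme0n1 as active and refuse to bind it
    rm "$MNT/test_nvme"
    umount "$MNT"
    wipefs --all /dev/nvme0n1p1                       # erase the ext4 signature
    wipefs --all /dev/nvme0n1                         # erase GPT/PMBR signatures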
00:04:38.363 14:33:14 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.363 14:33:14 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:38.363 ************************************ 00:04:38.363 END TEST nvme_mount 00:04:38.363 ************************************ 00:04:38.363 14:33:14 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:38.363 14:33:14 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:38.363 14:33:14 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.363 14:33:14 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.363 14:33:14 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:38.363 ************************************ 00:04:38.363 START TEST dm_mount 00:04:38.363 ************************************ 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:38.363 14:33:15 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:39.301 Creating new GPT entries in memory. 00:04:39.302 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:39.302 other utilities. 
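The partition bounds used by the dm_mount setup below follow from the bookkeeping visible in common.sh: the 1 GiB request is converted to 512-byte sectors, the first partition starts at sector 2048, and each subsequent partition starts right after the previous one. A quick sketch of that arithmetic:

    size=$(( 1073741824 / 512 ))            # 2097152 sectors per partition
    part_start=2048
    part_end=$(( part_start + size - 1 ))   # 2099199  -> sgdisk --new=1:2048:2099199
    part_start=$(( part_end + 1 ))          # 2099200
    part_end=$(( part_start + size - 1 ))   # 4196351  -> sgdisk --new=2:2099200:4196351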
00:04:39.302 14:33:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:39.302 14:33:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.302 14:33:16 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:39.302 14:33:16 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:39.302 14:33:16 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:40.680 Creating new GPT entries in memory. 00:04:40.680 The operation has completed successfully. 00:04:40.680 14:33:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:40.680 14:33:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.680 14:33:17 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:40.680 14:33:17 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:40.680 14:33:17 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:41.618 The operation has completed successfully. 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1393955 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:1a:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.618 14:33:18 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:44.971 14:33:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.512 14:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.512 14:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:47.512 14:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:47.512 14:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:47.512 14:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:47.512 14:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:47.512 14:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:1a:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:47.512 14:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:04:47.512 14:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:47.512 14:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:47.512 14:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:47.512 14:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:47.512 14:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:47.512 14:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:47.512 14:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.512 14:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:04:47.512 14:33:23 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:47.512 14:33:23 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.512 14:33:23 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:04:50.832 14:33:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.737 14:33:29 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.737 14:33:29 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:52.737 14:33:29 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:52.737 14:33:29 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:52.737 14:33:29 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:52.737 14:33:29 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:52.737 14:33:29 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:52.737 14:33:29 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.737 14:33:29 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:52.737 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:52.737 14:33:29 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:52.737 14:33:29 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:52.737 00:04:52.737 real 0m14.259s 00:04:52.737 user 0m3.831s 00:04:52.737 sys 0m7.481s 
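The dm_mount run that just finished (real 0m14.259s above) repeats the nvme_mount cycle but layers a device-mapper target over the two partitions first. A minimal sketch of the create/verify/teardown steps recorded in the trace; the dmsetup mapping table is supplied by the test script and is not captured in this log, and the mount point is a placeholder:

    dmsetup create nvme_dm_test                       # mapping table comes from the test script (not shown here)
    dm=$(readlink -f /dev/mapper/nvme_dm_test)        # resolves to /dev/dm-0 above
    # once the mapping exists, both partitions must list dm-0 as a holder
    test -e /sys/class/block/nvme0n1p1/holders/${dm##*/}
    test -e /sys/class/block/nvme0n1p2/holders/${dm##*/}
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    mount /dev/mapper/nvme_dm_test /path/to/dm_mount  # placeholder mount point
    # verify against setup.sh config, as in nvme_mount
    umount /path/to/dm_mount
    dmsetup remove --force nvme_dm_test
    wipefs --all /dev/nvme0n1p1
    wipefs --all /dev/nvme0n1p2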
00:04:52.737 14:33:29 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.737 14:33:29 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:52.737 ************************************ 00:04:52.737 END TEST dm_mount 00:04:52.737 ************************************ 00:04:52.737 14:33:29 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:52.737 14:33:29 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:52.737 14:33:29 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:52.737 14:33:29 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.737 14:33:29 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.737 14:33:29 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:52.737 14:33:29 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.737 14:33:29 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:52.997 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:52.997 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:52.997 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:52.997 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:52.997 14:33:29 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:52.997 14:33:29 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:52.997 14:33:29 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:52.997 14:33:29 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.997 14:33:29 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:52.997 14:33:29 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.997 14:33:29 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:52.997 00:04:52.997 real 0m39.884s 00:04:52.997 user 0m11.681s 00:04:52.997 sys 0m23.007s 00:04:52.997 14:33:29 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.997 14:33:29 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:52.997 ************************************ 00:04:52.997 END TEST devices 00:04:52.997 ************************************ 00:04:52.997 14:33:29 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:52.997 00:04:52.997 real 2m24.699s 00:04:52.997 user 0m44.366s 00:04:52.997 sys 1m24.070s 00:04:52.997 14:33:29 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.997 14:33:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:52.997 ************************************ 00:04:52.997 END TEST setup.sh 00:04:52.997 ************************************ 00:04:52.997 14:33:29 -- common/autotest_common.sh@1142 -- # return 0 00:04:52.997 14:33:29 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:57.192 Hugepages 00:04:57.192 node hugesize free / total 00:04:57.192 node0 1048576kB 0 / 0 00:04:57.192 node0 2048kB 2048 / 2048 00:04:57.192 node1 1048576kB 0 / 0 00:04:57.192 node1 2048kB 0 / 0 00:04:57.192 00:04:57.192 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:57.192 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 
00:04:57.192 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:57.192 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:57.192 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:57.192 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:57.192 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:57.192 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:57.192 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:57.192 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:57.192 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:57.192 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:57.192 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:57.192 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:57.192 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:57.192 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:57.192 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:57.192 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:57.192 14:33:33 -- spdk/autotest.sh@130 -- # uname -s 00:04:57.192 14:33:33 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:57.192 14:33:33 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:57.192 14:33:33 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:00.484 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:00.484 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:00.484 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:00.484 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:00.484 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:00.484 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:00.484 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:00.484 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:00.484 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:00.484 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:00.484 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:00.484 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:00.484 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:00.484 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:00.484 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:00.484 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:03.773 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:05:05.675 14:33:42 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:06.612 14:33:43 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:06.612 14:33:43 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:06.612 14:33:43 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:06.612 14:33:43 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:06.612 14:33:43 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:06.612 14:33:43 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:06.612 14:33:43 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:06.612 14:33:43 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:06.612 14:33:43 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:06.612 14:33:43 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:06.612 14:33:43 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:1a:00.0 00:05:06.612 14:33:43 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:10.864 Waiting for block devices as requested 00:05:10.864 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:05:10.864 0000:00:04.7 (8086 2021): 
vfio-pci -> ioatdma 00:05:10.864 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:10.864 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:10.864 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:10.864 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:10.864 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:10.864 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:11.123 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:11.123 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:11.123 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:11.382 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:11.382 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:11.382 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:11.640 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:11.640 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:11.640 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:13.541 14:33:50 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:13.541 14:33:50 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:1a:00.0 00:05:13.541 14:33:50 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:13.541 14:33:50 -- common/autotest_common.sh@1502 -- # grep 0000:1a:00.0/nvme/nvme 00:05:13.541 14:33:50 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:05:13.541 14:33:50 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 ]] 00:05:13.541 14:33:50 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:05:13.541 14:33:50 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:13.541 14:33:50 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:13.541 14:33:50 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:13.541 14:33:50 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:13.541 14:33:50 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:13.541 14:33:50 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:13.541 14:33:50 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:13.541 14:33:50 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:13.541 14:33:50 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:13.541 14:33:50 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:13.541 14:33:50 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:13.541 14:33:50 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:13.541 14:33:50 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:13.541 14:33:50 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:13.541 14:33:50 -- common/autotest_common.sh@1557 -- # continue 00:05:13.541 14:33:50 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:13.541 14:33:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:13.541 14:33:50 -- common/autotest_common.sh@10 -- # set +x 00:05:13.800 14:33:50 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:13.800 14:33:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:13.800 14:33:50 -- common/autotest_common.sh@10 -- # set +x 00:05:13.800 14:33:50 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:17.182 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:17.182 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:05:17.182 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:17.182 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:17.182 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:17.182 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:17.182 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:17.441 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:17.442 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:17.442 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:17.442 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:17.442 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:17.442 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:17.442 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:17.442 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:17.442 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:20.732 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:05:22.637 14:33:59 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:22.637 14:33:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:22.637 14:33:59 -- common/autotest_common.sh@10 -- # set +x 00:05:22.637 14:33:59 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:22.637 14:33:59 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:22.637 14:33:59 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:22.637 14:33:59 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:22.637 14:33:59 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:22.637 14:33:59 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:22.637 14:33:59 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:22.637 14:33:59 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:22.637 14:33:59 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:22.637 14:33:59 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:22.637 14:33:59 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:22.637 14:33:59 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:22.637 14:33:59 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:1a:00.0 00:05:22.637 14:33:59 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:22.637 14:33:59 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:1a:00.0/device 00:05:22.637 14:33:59 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:22.637 14:33:59 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:22.637 14:33:59 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:22.637 14:33:59 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:1a:00.0 00:05:22.637 14:33:59 -- common/autotest_common.sh@1592 -- # [[ -z 0000:1a:00.0 ]] 00:05:22.637 14:33:59 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1404216 00:05:22.637 14:33:59 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.637 14:33:59 -- common/autotest_common.sh@1598 -- # waitforlisten 1404216 00:05:22.637 14:33:59 -- common/autotest_common.sh@829 -- # '[' -z 1404216 ']' 00:05:22.637 14:33:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.637 14:33:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.637 14:33:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
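The opal_revert_cleanup sequence a few entries above selects controllers by PCI device ID before spdk_tgt is started: it walks the NVMe BDFs reported by gen_nvme.sh and keeps those whose sysfs device file reads 0x0a54. A compact sketch of that filter, using the same script and jq expression seen in the trace (rootdir is the workspace checkout):

    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    wanted=0x0a54
    matches=()
    for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0a54
        [[ $device == "$wanted" ]] && matches+=("$bdf")
    done
    printf '%s\n' "${matches[@]}"                          # -> 0000:1a:00.0 on this node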
00:05:22.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.637 14:33:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.637 14:33:59 -- common/autotest_common.sh@10 -- # set +x 00:05:22.637 [2024-07-12 14:33:59.377251] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:05:22.637 [2024-07-12 14:33:59.377324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1404216 ] 00:05:22.637 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.897 [2024-07-12 14:33:59.467368] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.897 [2024-07-12 14:33:59.558158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.463 14:34:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.463 14:34:00 -- common/autotest_common.sh@862 -- # return 0 00:05:23.463 14:34:00 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:23.463 14:34:00 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:23.463 14:34:00 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:1a:00.0 00:05:26.752 nvme0n1 00:05:26.752 14:34:03 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:26.752 [2024-07-12 14:34:03.416942] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:26.752 request: 00:05:26.752 { 00:05:26.752 "nvme_ctrlr_name": "nvme0", 00:05:26.752 "password": "test", 00:05:26.752 "method": "bdev_nvme_opal_revert", 00:05:26.752 "req_id": 1 00:05:26.752 } 00:05:26.752 Got JSON-RPC error response 00:05:26.752 response: 00:05:26.752 { 00:05:26.752 "code": -32602, 00:05:26.752 "message": "Invalid parameters" 00:05:26.752 } 00:05:26.752 14:34:03 -- common/autotest_common.sh@1604 -- # true 00:05:26.752 14:34:03 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:26.752 14:34:03 -- common/autotest_common.sh@1608 -- # killprocess 1404216 00:05:26.752 14:34:03 -- common/autotest_common.sh@948 -- # '[' -z 1404216 ']' 00:05:26.752 14:34:03 -- common/autotest_common.sh@952 -- # kill -0 1404216 00:05:26.752 14:34:03 -- common/autotest_common.sh@953 -- # uname 00:05:26.752 14:34:03 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.752 14:34:03 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1404216 00:05:26.752 14:34:03 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.753 14:34:03 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.753 14:34:03 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1404216' 00:05:26.753 killing process with pid 1404216 00:05:26.753 14:34:03 -- common/autotest_common.sh@967 -- # kill 1404216 00:05:26.753 14:34:03 -- common/autotest_common.sh@972 -- # wait 1404216 00:05:30.950 14:34:07 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:30.950 14:34:07 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:30.950 14:34:07 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:30.950 14:34:07 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:30.950 14:34:07 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:30.950 14:34:07 -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.950 14:34:07 -- common/autotest_common.sh@10 -- # set +x 00:05:30.950 14:34:07 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:30.950 14:34:07 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:30.950 14:34:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.950 14:34:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.950 14:34:07 -- common/autotest_common.sh@10 -- # set +x 00:05:30.950 ************************************ 00:05:30.950 START TEST env 00:05:30.950 ************************************ 00:05:30.950 14:34:07 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:30.950 * Looking for test storage... 00:05:30.950 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:30.950 14:34:07 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:30.950 14:34:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.950 14:34:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.950 14:34:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.950 ************************************ 00:05:30.950 START TEST env_memory 00:05:30.950 ************************************ 00:05:30.950 14:34:07 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:30.950 00:05:30.950 00:05:30.950 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.950 http://cunit.sourceforge.net/ 00:05:30.950 00:05:30.950 00:05:30.950 Suite: memory 00:05:30.950 Test: alloc and free memory map ...[2024-07-12 14:34:07.625403] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:30.950 passed 00:05:30.950 Test: mem map translation ...[2024-07-12 14:34:07.638955] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:30.950 [2024-07-12 14:34:07.638975] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:30.950 [2024-07-12 14:34:07.639006] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:30.950 [2024-07-12 14:34:07.639016] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:30.950 passed 00:05:30.950 Test: mem map registration ...[2024-07-12 14:34:07.660120] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:30.950 [2024-07-12 14:34:07.660145] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:30.950 passed 00:05:30.950 Test: mem map adjacent registrations ...passed 00:05:30.950 00:05:30.950 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:30.950 suites 1 1 n/a 0 0 00:05:30.950 tests 4 4 4 0 0 00:05:30.950 asserts 152 152 152 0 n/a 00:05:30.950 00:05:30.950 Elapsed time = 0.087 seconds 00:05:30.950 00:05:30.950 real 0m0.100s 00:05:30.950 user 0m0.090s 00:05:30.950 sys 0m0.010s 00:05:30.950 14:34:07 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.950 14:34:07 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:30.950 ************************************ 00:05:30.950 END TEST env_memory 00:05:30.950 ************************************ 00:05:30.950 14:34:07 env -- common/autotest_common.sh@1142 -- # return 0 00:05:30.950 14:34:07 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:30.950 14:34:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.950 14:34:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.950 14:34:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.210 ************************************ 00:05:31.210 START TEST env_vtophys 00:05:31.210 ************************************ 00:05:31.210 14:34:07 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:31.210 EAL: lib.eal log level changed from notice to debug 00:05:31.210 EAL: Detected lcore 0 as core 0 on socket 0 00:05:31.210 EAL: Detected lcore 1 as core 1 on socket 0 00:05:31.210 EAL: Detected lcore 2 as core 2 on socket 0 00:05:31.210 EAL: Detected lcore 3 as core 3 on socket 0 00:05:31.210 EAL: Detected lcore 4 as core 4 on socket 0 00:05:31.210 EAL: Detected lcore 5 as core 8 on socket 0 00:05:31.210 EAL: Detected lcore 6 as core 9 on socket 0 00:05:31.210 EAL: Detected lcore 7 as core 10 on socket 0 00:05:31.210 EAL: Detected lcore 8 as core 11 on socket 0 00:05:31.210 EAL: Detected lcore 9 as core 16 on socket 0 00:05:31.210 EAL: Detected lcore 10 as core 17 on socket 0 00:05:31.210 EAL: Detected lcore 11 as core 18 on socket 0 00:05:31.210 EAL: Detected lcore 12 as core 19 on socket 0 00:05:31.210 EAL: Detected lcore 13 as core 20 on socket 0 00:05:31.210 EAL: Detected lcore 14 as core 24 on socket 0 00:05:31.210 EAL: Detected lcore 15 as core 25 on socket 0 00:05:31.210 EAL: Detected lcore 16 as core 26 on socket 0 00:05:31.210 EAL: Detected lcore 17 as core 27 on socket 0 00:05:31.210 EAL: Detected lcore 18 as core 0 on socket 1 00:05:31.210 EAL: Detected lcore 19 as core 1 on socket 1 00:05:31.210 EAL: Detected lcore 20 as core 2 on socket 1 00:05:31.210 EAL: Detected lcore 21 as core 3 on socket 1 00:05:31.210 EAL: Detected lcore 22 as core 4 on socket 1 00:05:31.210 EAL: Detected lcore 23 as core 8 on socket 1 00:05:31.210 EAL: Detected lcore 24 as core 9 on socket 1 00:05:31.210 EAL: Detected lcore 25 as core 10 on socket 1 00:05:31.210 EAL: Detected lcore 26 as core 11 on socket 1 00:05:31.210 EAL: Detected lcore 27 as core 16 on socket 1 00:05:31.210 EAL: Detected lcore 28 as core 17 on socket 1 00:05:31.210 EAL: Detected lcore 29 as core 18 on socket 1 00:05:31.210 EAL: Detected lcore 30 as core 19 on socket 1 00:05:31.210 EAL: Detected lcore 31 as core 20 on socket 1 00:05:31.210 EAL: Detected lcore 32 as core 24 on socket 1 00:05:31.210 EAL: Detected lcore 33 as core 25 on socket 1 00:05:31.210 EAL: Detected lcore 34 as core 26 on socket 1 00:05:31.210 EAL: Detected lcore 35 as core 27 on socket 1 00:05:31.210 EAL: Detected lcore 36 as core 0 on socket 0 00:05:31.210 EAL: Detected lcore 37 
as core 1 on socket 0 00:05:31.210 EAL: Detected lcore 38 as core 2 on socket 0 00:05:31.211 EAL: Detected lcore 39 as core 3 on socket 0 00:05:31.211 EAL: Detected lcore 40 as core 4 on socket 0 00:05:31.211 EAL: Detected lcore 41 as core 8 on socket 0 00:05:31.211 EAL: Detected lcore 42 as core 9 on socket 0 00:05:31.211 EAL: Detected lcore 43 as core 10 on socket 0 00:05:31.211 EAL: Detected lcore 44 as core 11 on socket 0 00:05:31.211 EAL: Detected lcore 45 as core 16 on socket 0 00:05:31.211 EAL: Detected lcore 46 as core 17 on socket 0 00:05:31.211 EAL: Detected lcore 47 as core 18 on socket 0 00:05:31.211 EAL: Detected lcore 48 as core 19 on socket 0 00:05:31.211 EAL: Detected lcore 49 as core 20 on socket 0 00:05:31.211 EAL: Detected lcore 50 as core 24 on socket 0 00:05:31.211 EAL: Detected lcore 51 as core 25 on socket 0 00:05:31.211 EAL: Detected lcore 52 as core 26 on socket 0 00:05:31.211 EAL: Detected lcore 53 as core 27 on socket 0 00:05:31.211 EAL: Detected lcore 54 as core 0 on socket 1 00:05:31.211 EAL: Detected lcore 55 as core 1 on socket 1 00:05:31.211 EAL: Detected lcore 56 as core 2 on socket 1 00:05:31.211 EAL: Detected lcore 57 as core 3 on socket 1 00:05:31.211 EAL: Detected lcore 58 as core 4 on socket 1 00:05:31.211 EAL: Detected lcore 59 as core 8 on socket 1 00:05:31.211 EAL: Detected lcore 60 as core 9 on socket 1 00:05:31.211 EAL: Detected lcore 61 as core 10 on socket 1 00:05:31.211 EAL: Detected lcore 62 as core 11 on socket 1 00:05:31.211 EAL: Detected lcore 63 as core 16 on socket 1 00:05:31.211 EAL: Detected lcore 64 as core 17 on socket 1 00:05:31.211 EAL: Detected lcore 65 as core 18 on socket 1 00:05:31.211 EAL: Detected lcore 66 as core 19 on socket 1 00:05:31.211 EAL: Detected lcore 67 as core 20 on socket 1 00:05:31.211 EAL: Detected lcore 68 as core 24 on socket 1 00:05:31.211 EAL: Detected lcore 69 as core 25 on socket 1 00:05:31.211 EAL: Detected lcore 70 as core 26 on socket 1 00:05:31.211 EAL: Detected lcore 71 as core 27 on socket 1 00:05:31.211 EAL: Maximum logical cores by configuration: 128 00:05:31.211 EAL: Detected CPU lcores: 72 00:05:31.211 EAL: Detected NUMA nodes: 2 00:05:31.211 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:31.211 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:31.211 EAL: Checking presence of .so 'librte_eal.so' 00:05:31.211 EAL: Detected static linkage of DPDK 00:05:31.211 EAL: No shared files mode enabled, IPC will be disabled 00:05:31.211 EAL: Bus pci wants IOVA as 'DC' 00:05:31.211 EAL: Buses did not request a specific IOVA mode. 00:05:31.211 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:31.211 EAL: Selected IOVA mode 'VA' 00:05:31.211 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.211 EAL: Probing VFIO support... 00:05:31.211 EAL: IOMMU type 1 (Type 1) is supported 00:05:31.211 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:31.211 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:31.211 EAL: VFIO support initialized 00:05:31.211 EAL: Ask a virtual area of 0x2e000 bytes 00:05:31.211 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:31.211 EAL: Setting up physically contiguous memory... 
00:05:31.211 EAL: Setting maximum number of open files to 524288 00:05:31.211 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:31.211 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:31.211 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:31.211 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.211 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:31.211 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.211 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.211 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:31.211 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:31.211 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.211 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:31.211 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.211 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.211 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:31.211 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:31.211 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.211 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:31.211 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.211 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.211 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:31.211 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:31.211 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.211 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:31.211 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.211 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.211 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:31.211 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:31.211 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:31.211 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.211 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:31.211 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:31.211 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.211 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:31.211 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:31.211 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.211 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:31.211 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:31.211 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.211 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:31.211 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:31.211 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.211 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:31.211 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:31.211 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.211 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:31.211 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:31.211 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.211 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:31.211 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:31.211 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.211 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:31.211 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:31.211 EAL: Hugepages will be freed exactly as allocated. 00:05:31.211 EAL: No shared files mode enabled, IPC is disabled 00:05:31.211 EAL: No shared files mode enabled, IPC is disabled 00:05:31.211 EAL: TSC frequency is ~2300000 KHz 00:05:31.211 EAL: Main lcore 0 is ready (tid=7fa778520a00;cpuset=[0]) 00:05:31.211 EAL: Trying to obtain current memory policy. 00:05:31.211 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.211 EAL: Restoring previous memory policy: 0 00:05:31.211 EAL: request: mp_malloc_sync 00:05:31.211 EAL: No shared files mode enabled, IPC is disabled 00:05:31.211 EAL: Heap on socket 0 was expanded by 2MB 00:05:31.211 EAL: No shared files mode enabled, IPC is disabled 00:05:31.211 EAL: Mem event callback 'spdk:(nil)' registered 00:05:31.211 00:05:31.211 00:05:31.211 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.211 http://cunit.sourceforge.net/ 00:05:31.211 00:05:31.211 00:05:31.211 Suite: components_suite 00:05:31.211 Test: vtophys_malloc_test ...passed 00:05:31.211 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:31.211 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.211 EAL: Restoring previous memory policy: 4 00:05:31.211 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.211 EAL: request: mp_malloc_sync 00:05:31.211 EAL: No shared files mode enabled, IPC is disabled 00:05:31.211 EAL: Heap on socket 0 was expanded by 4MB 00:05:31.211 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.211 EAL: request: mp_malloc_sync 00:05:31.211 EAL: No shared files mode enabled, IPC is disabled 00:05:31.211 EAL: Heap on socket 0 was shrunk by 4MB 00:05:31.211 EAL: Trying to obtain current memory policy. 00:05:31.211 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.211 EAL: Restoring previous memory policy: 4 00:05:31.211 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.211 EAL: request: mp_malloc_sync 00:05:31.211 EAL: No shared files mode enabled, IPC is disabled 00:05:31.211 EAL: Heap on socket 0 was expanded by 6MB 00:05:31.211 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.211 EAL: request: mp_malloc_sync 00:05:31.211 EAL: No shared files mode enabled, IPC is disabled 00:05:31.211 EAL: Heap on socket 0 was shrunk by 6MB 00:05:31.211 EAL: Trying to obtain current memory policy. 00:05:31.211 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.211 EAL: Restoring previous memory policy: 4 00:05:31.211 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.211 EAL: request: mp_malloc_sync 00:05:31.211 EAL: No shared files mode enabled, IPC is disabled 00:05:31.211 EAL: Heap on socket 0 was expanded by 10MB 00:05:31.211 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.211 EAL: request: mp_malloc_sync 00:05:31.211 EAL: No shared files mode enabled, IPC is disabled 00:05:31.211 EAL: Heap on socket 0 was shrunk by 10MB 00:05:31.211 EAL: Trying to obtain current memory policy. 
00:05:31.211 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.211 EAL: Restoring previous memory policy: 4 00:05:31.211 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.211 EAL: request: mp_malloc_sync 00:05:31.211 EAL: No shared files mode enabled, IPC is disabled 00:05:31.211 EAL: Heap on socket 0 was expanded by 18MB 00:05:31.211 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.211 EAL: request: mp_malloc_sync 00:05:31.211 EAL: No shared files mode enabled, IPC is disabled 00:05:31.211 EAL: Heap on socket 0 was shrunk by 18MB 00:05:31.211 EAL: Trying to obtain current memory policy. 00:05:31.211 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.211 EAL: Restoring previous memory policy: 4 00:05:31.211 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.211 EAL: request: mp_malloc_sync 00:05:31.211 EAL: No shared files mode enabled, IPC is disabled 00:05:31.211 EAL: Heap on socket 0 was expanded by 34MB 00:05:31.211 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.211 EAL: request: mp_malloc_sync 00:05:31.211 EAL: No shared files mode enabled, IPC is disabled 00:05:31.211 EAL: Heap on socket 0 was shrunk by 34MB 00:05:31.211 EAL: Trying to obtain current memory policy. 00:05:31.211 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.211 EAL: Restoring previous memory policy: 4 00:05:31.211 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.211 EAL: request: mp_malloc_sync 00:05:31.211 EAL: No shared files mode enabled, IPC is disabled 00:05:31.211 EAL: Heap on socket 0 was expanded by 66MB 00:05:31.211 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.211 EAL: request: mp_malloc_sync 00:05:31.211 EAL: No shared files mode enabled, IPC is disabled 00:05:31.211 EAL: Heap on socket 0 was shrunk by 66MB 00:05:31.211 EAL: Trying to obtain current memory policy. 00:05:31.211 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.211 EAL: Restoring previous memory policy: 4 00:05:31.212 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.212 EAL: request: mp_malloc_sync 00:05:31.212 EAL: No shared files mode enabled, IPC is disabled 00:05:31.212 EAL: Heap on socket 0 was expanded by 130MB 00:05:31.212 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.471 EAL: request: mp_malloc_sync 00:05:31.471 EAL: No shared files mode enabled, IPC is disabled 00:05:31.471 EAL: Heap on socket 0 was shrunk by 130MB 00:05:31.471 EAL: Trying to obtain current memory policy. 00:05:31.471 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.471 EAL: Restoring previous memory policy: 4 00:05:31.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.471 EAL: request: mp_malloc_sync 00:05:31.471 EAL: No shared files mode enabled, IPC is disabled 00:05:31.471 EAL: Heap on socket 0 was expanded by 258MB 00:05:31.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.471 EAL: request: mp_malloc_sync 00:05:31.471 EAL: No shared files mode enabled, IPC is disabled 00:05:31.471 EAL: Heap on socket 0 was shrunk by 258MB 00:05:31.471 EAL: Trying to obtain current memory policy. 
00:05:31.471 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.731 EAL: Restoring previous memory policy: 4 00:05:31.731 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.731 EAL: request: mp_malloc_sync 00:05:31.731 EAL: No shared files mode enabled, IPC is disabled 00:05:31.731 EAL: Heap on socket 0 was expanded by 514MB 00:05:31.731 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.731 EAL: request: mp_malloc_sync 00:05:31.731 EAL: No shared files mode enabled, IPC is disabled 00:05:31.731 EAL: Heap on socket 0 was shrunk by 514MB 00:05:31.731 EAL: Trying to obtain current memory policy. 00:05:31.731 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.991 EAL: Restoring previous memory policy: 4 00:05:31.991 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.991 EAL: request: mp_malloc_sync 00:05:31.991 EAL: No shared files mode enabled, IPC is disabled 00:05:31.991 EAL: Heap on socket 0 was expanded by 1026MB 00:05:32.250 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.250 EAL: request: mp_malloc_sync 00:05:32.250 EAL: No shared files mode enabled, IPC is disabled 00:05:32.250 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:32.250 passed 00:05:32.250 00:05:32.250 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.250 suites 1 1 n/a 0 0 00:05:32.250 tests 2 2 2 0 0 00:05:32.250 asserts 497 497 497 0 n/a 00:05:32.250 00:05:32.250 Elapsed time = 1.114 seconds 00:05:32.250 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.250 EAL: request: mp_malloc_sync 00:05:32.250 EAL: No shared files mode enabled, IPC is disabled 00:05:32.250 EAL: Heap on socket 0 was shrunk by 2MB 00:05:32.250 EAL: No shared files mode enabled, IPC is disabled 00:05:32.250 EAL: No shared files mode enabled, IPC is disabled 00:05:32.250 EAL: No shared files mode enabled, IPC is disabled 00:05:32.250 00:05:32.250 real 0m1.252s 00:05:32.250 user 0m0.718s 00:05:32.250 sys 0m0.506s 00:05:32.250 14:34:09 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.250 14:34:09 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:32.250 ************************************ 00:05:32.250 END TEST env_vtophys 00:05:32.250 ************************************ 00:05:32.509 14:34:09 env -- common/autotest_common.sh@1142 -- # return 0 00:05:32.509 14:34:09 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:32.509 14:34:09 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.509 14:34:09 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.509 14:34:09 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.509 ************************************ 00:05:32.509 START TEST env_pci 00:05:32.509 ************************************ 00:05:32.509 14:34:09 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:32.509 00:05:32.509 00:05:32.509 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.509 http://cunit.sourceforge.net/ 00:05:32.509 00:05:32.509 00:05:32.509 Suite: pci 00:05:32.509 Test: pci_hook ...[2024-07-12 14:34:09.132665] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1405530 has claimed it 00:05:32.509 EAL: Cannot find device (10000:00:01.0) 00:05:32.509 EAL: Failed to attach device on primary process 00:05:32.509 passed 
00:05:32.509 00:05:32.509 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.509 suites 1 1 n/a 0 0 00:05:32.509 tests 1 1 1 0 0 00:05:32.509 asserts 25 25 25 0 n/a 00:05:32.509 00:05:32.509 Elapsed time = 0.037 seconds 00:05:32.509 00:05:32.509 real 0m0.058s 00:05:32.509 user 0m0.014s 00:05:32.509 sys 0m0.044s 00:05:32.509 14:34:09 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.509 14:34:09 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:32.509 ************************************ 00:05:32.509 END TEST env_pci 00:05:32.509 ************************************ 00:05:32.509 14:34:09 env -- common/autotest_common.sh@1142 -- # return 0 00:05:32.509 14:34:09 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:32.509 14:34:09 env -- env/env.sh@15 -- # uname 00:05:32.509 14:34:09 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:32.509 14:34:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:32.509 14:34:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:32.509 14:34:09 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:32.509 14:34:09 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.509 14:34:09 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.509 ************************************ 00:05:32.509 START TEST env_dpdk_post_init 00:05:32.509 ************************************ 00:05:32.509 14:34:09 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:32.509 EAL: Detected CPU lcores: 72 00:05:32.509 EAL: Detected NUMA nodes: 2 00:05:32.768 EAL: Detected static linkage of DPDK 00:05:32.768 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:32.768 EAL: Selected IOVA mode 'VA' 00:05:32.768 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.768 EAL: VFIO support initialized 00:05:32.768 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:32.768 EAL: Using IOMMU type 1 (Type 1) 00:05:33.706 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:1a:00.0 (socket 0) 00:05:38.974 EAL: Releasing PCI mapped resource for 0000:1a:00.0 00:05:38.974 EAL: Calling pci_unmap_resource for 0000:1a:00.0 at 0x202001000000 00:05:38.974 Starting DPDK initialization... 00:05:38.974 Starting SPDK post initialization... 00:05:38.974 SPDK NVMe probe 00:05:38.974 Attaching to 0000:1a:00.0 00:05:38.974 Attached to 0000:1a:00.0 00:05:38.974 Cleaning up... 
00:05:38.974 00:05:38.974 real 0m6.493s 00:05:38.974 user 0m4.954s 00:05:38.975 sys 0m0.794s 00:05:38.975 14:34:15 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.975 14:34:15 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:38.975 ************************************ 00:05:38.975 END TEST env_dpdk_post_init 00:05:38.975 ************************************ 00:05:39.235 14:34:15 env -- common/autotest_common.sh@1142 -- # return 0 00:05:39.235 14:34:15 env -- env/env.sh@26 -- # uname 00:05:39.235 14:34:15 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:39.235 14:34:15 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:39.235 14:34:15 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.235 14:34:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.235 14:34:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.235 ************************************ 00:05:39.235 START TEST env_mem_callbacks 00:05:39.235 ************************************ 00:05:39.235 14:34:15 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:39.235 EAL: Detected CPU lcores: 72 00:05:39.235 EAL: Detected NUMA nodes: 2 00:05:39.235 EAL: Detected static linkage of DPDK 00:05:39.235 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:39.235 EAL: Selected IOVA mode 'VA' 00:05:39.235 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.235 EAL: VFIO support initialized 00:05:39.235 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.235 00:05:39.235 00:05:39.235 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.235 http://cunit.sourceforge.net/ 00:05:39.235 00:05:39.235 00:05:39.235 Suite: memory 00:05:39.235 Test: test ... 
00:05:39.235 register 0x200000200000 2097152 00:05:39.235 malloc 3145728 00:05:39.235 register 0x200000400000 4194304 00:05:39.235 buf 0x200000500000 len 3145728 PASSED 00:05:39.235 malloc 64 00:05:39.235 buf 0x2000004fff40 len 64 PASSED 00:05:39.235 malloc 4194304 00:05:39.235 register 0x200000800000 6291456 00:05:39.235 buf 0x200000a00000 len 4194304 PASSED 00:05:39.235 free 0x200000500000 3145728 00:05:39.235 free 0x2000004fff40 64 00:05:39.235 unregister 0x200000400000 4194304 PASSED 00:05:39.235 free 0x200000a00000 4194304 00:05:39.235 unregister 0x200000800000 6291456 PASSED 00:05:39.235 malloc 8388608 00:05:39.235 register 0x200000400000 10485760 00:05:39.235 buf 0x200000600000 len 8388608 PASSED 00:05:39.235 free 0x200000600000 8388608 00:05:39.235 unregister 0x200000400000 10485760 PASSED 00:05:39.235 passed 00:05:39.235 00:05:39.235 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.235 suites 1 1 n/a 0 0 00:05:39.235 tests 1 1 1 0 0 00:05:39.235 asserts 15 15 15 0 n/a 00:05:39.235 00:05:39.235 Elapsed time = 0.009 seconds 00:05:39.235 00:05:39.235 real 0m0.075s 00:05:39.235 user 0m0.017s 00:05:39.235 sys 0m0.057s 00:05:39.235 14:34:15 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.235 14:34:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:39.235 ************************************ 00:05:39.235 END TEST env_mem_callbacks 00:05:39.235 ************************************ 00:05:39.235 14:34:15 env -- common/autotest_common.sh@1142 -- # return 0 00:05:39.235 00:05:39.235 real 0m8.523s 00:05:39.235 user 0m5.993s 00:05:39.235 sys 0m1.795s 00:05:39.235 14:34:15 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.235 14:34:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.235 ************************************ 00:05:39.235 END TEST env 00:05:39.235 ************************************ 00:05:39.235 14:34:16 -- common/autotest_common.sh@1142 -- # return 0 00:05:39.235 14:34:16 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:39.235 14:34:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.235 14:34:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.235 14:34:16 -- common/autotest_common.sh@10 -- # set +x 00:05:39.494 ************************************ 00:05:39.494 START TEST rpc 00:05:39.494 ************************************ 00:05:39.494 14:34:16 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:39.494 * Looking for test storage... 00:05:39.494 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:39.494 14:34:16 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1406526 00:05:39.494 14:34:16 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.494 14:34:16 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:39.494 14:34:16 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1406526 00:05:39.494 14:34:16 rpc -- common/autotest_common.sh@829 -- # '[' -z 1406526 ']' 00:05:39.494 14:34:16 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.494 14:34:16 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.494 14:34:16 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:39.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.494 14:34:16 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.494 14:34:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.494 [2024-07-12 14:34:16.192839] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:05:39.494 [2024-07-12 14:34:16.192932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1406526 ] 00:05:39.494 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.494 [2024-07-12 14:34:16.280038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.754 [2024-07-12 14:34:16.369243] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:39.754 [2024-07-12 14:34:16.369280] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1406526' to capture a snapshot of events at runtime. 00:05:39.754 [2024-07-12 14:34:16.369289] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:39.754 [2024-07-12 14:34:16.369298] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:39.754 [2024-07-12 14:34:16.369305] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1406526 for offline analysis/debug. 00:05:39.754 [2024-07-12 14:34:16.369329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.327 14:34:17 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.327 14:34:17 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:40.327 14:34:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:40.327 14:34:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:40.327 14:34:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:40.327 14:34:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:40.327 14:34:17 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.327 14:34:17 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.327 14:34:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.327 ************************************ 00:05:40.327 START TEST rpc_integrity 00:05:40.327 ************************************ 00:05:40.327 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:40.327 14:34:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:40.327 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.327 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.327 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.327 14:34:17 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:40.327 14:34:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:40.327 14:34:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:40.327 14:34:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:40.327 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.327 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.588 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.588 14:34:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:40.588 14:34:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:40.588 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.588 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.588 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.588 14:34:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:40.588 { 00:05:40.588 "name": "Malloc0", 00:05:40.588 "aliases": [ 00:05:40.588 "137b9ad3-4a1d-4bc7-9351-48cf83d1f570" 00:05:40.588 ], 00:05:40.588 "product_name": "Malloc disk", 00:05:40.588 "block_size": 512, 00:05:40.588 "num_blocks": 16384, 00:05:40.588 "uuid": "137b9ad3-4a1d-4bc7-9351-48cf83d1f570", 00:05:40.589 "assigned_rate_limits": { 00:05:40.589 "rw_ios_per_sec": 0, 00:05:40.589 "rw_mbytes_per_sec": 0, 00:05:40.589 "r_mbytes_per_sec": 0, 00:05:40.589 "w_mbytes_per_sec": 0 00:05:40.589 }, 00:05:40.589 "claimed": false, 00:05:40.589 "zoned": false, 00:05:40.589 "supported_io_types": { 00:05:40.589 "read": true, 00:05:40.589 "write": true, 00:05:40.589 "unmap": true, 00:05:40.589 "flush": true, 00:05:40.589 "reset": true, 00:05:40.589 "nvme_admin": false, 00:05:40.589 "nvme_io": false, 00:05:40.589 "nvme_io_md": false, 00:05:40.589 "write_zeroes": true, 00:05:40.589 "zcopy": true, 00:05:40.589 "get_zone_info": false, 00:05:40.589 "zone_management": false, 00:05:40.589 "zone_append": false, 00:05:40.589 "compare": false, 00:05:40.589 "compare_and_write": false, 00:05:40.589 "abort": true, 00:05:40.589 "seek_hole": false, 00:05:40.589 "seek_data": false, 00:05:40.589 "copy": true, 00:05:40.589 "nvme_iov_md": false 00:05:40.589 }, 00:05:40.589 "memory_domains": [ 00:05:40.589 { 00:05:40.589 "dma_device_id": "system", 00:05:40.589 "dma_device_type": 1 00:05:40.589 }, 00:05:40.589 { 00:05:40.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.589 "dma_device_type": 2 00:05:40.589 } 00:05:40.589 ], 00:05:40.589 "driver_specific": {} 00:05:40.589 } 00:05:40.589 ]' 00:05:40.589 14:34:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:40.589 14:34:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:40.589 14:34:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:40.589 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.589 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.589 [2024-07-12 14:34:17.192015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:40.589 [2024-07-12 14:34:17.192048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:40.589 [2024-07-12 14:34:17.192065] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x459f650 00:05:40.589 [2024-07-12 14:34:17.192075] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:05:40.589 [2024-07-12 14:34:17.192905] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:40.589 [2024-07-12 14:34:17.192928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:40.589 Passthru0 00:05:40.589 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.589 14:34:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:40.589 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.589 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.589 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.589 14:34:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:40.589 { 00:05:40.589 "name": "Malloc0", 00:05:40.589 "aliases": [ 00:05:40.589 "137b9ad3-4a1d-4bc7-9351-48cf83d1f570" 00:05:40.589 ], 00:05:40.589 "product_name": "Malloc disk", 00:05:40.589 "block_size": 512, 00:05:40.589 "num_blocks": 16384, 00:05:40.589 "uuid": "137b9ad3-4a1d-4bc7-9351-48cf83d1f570", 00:05:40.589 "assigned_rate_limits": { 00:05:40.589 "rw_ios_per_sec": 0, 00:05:40.589 "rw_mbytes_per_sec": 0, 00:05:40.589 "r_mbytes_per_sec": 0, 00:05:40.589 "w_mbytes_per_sec": 0 00:05:40.589 }, 00:05:40.589 "claimed": true, 00:05:40.589 "claim_type": "exclusive_write", 00:05:40.589 "zoned": false, 00:05:40.589 "supported_io_types": { 00:05:40.589 "read": true, 00:05:40.589 "write": true, 00:05:40.589 "unmap": true, 00:05:40.589 "flush": true, 00:05:40.589 "reset": true, 00:05:40.589 "nvme_admin": false, 00:05:40.589 "nvme_io": false, 00:05:40.589 "nvme_io_md": false, 00:05:40.589 "write_zeroes": true, 00:05:40.589 "zcopy": true, 00:05:40.589 "get_zone_info": false, 00:05:40.589 "zone_management": false, 00:05:40.589 "zone_append": false, 00:05:40.589 "compare": false, 00:05:40.589 "compare_and_write": false, 00:05:40.589 "abort": true, 00:05:40.589 "seek_hole": false, 00:05:40.589 "seek_data": false, 00:05:40.589 "copy": true, 00:05:40.589 "nvme_iov_md": false 00:05:40.589 }, 00:05:40.589 "memory_domains": [ 00:05:40.589 { 00:05:40.589 "dma_device_id": "system", 00:05:40.589 "dma_device_type": 1 00:05:40.589 }, 00:05:40.589 { 00:05:40.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.589 "dma_device_type": 2 00:05:40.589 } 00:05:40.589 ], 00:05:40.589 "driver_specific": {} 00:05:40.589 }, 00:05:40.589 { 00:05:40.589 "name": "Passthru0", 00:05:40.589 "aliases": [ 00:05:40.589 "858ff5a5-4d2c-566f-afb8-3d0ab0fe1b44" 00:05:40.589 ], 00:05:40.589 "product_name": "passthru", 00:05:40.589 "block_size": 512, 00:05:40.589 "num_blocks": 16384, 00:05:40.589 "uuid": "858ff5a5-4d2c-566f-afb8-3d0ab0fe1b44", 00:05:40.589 "assigned_rate_limits": { 00:05:40.589 "rw_ios_per_sec": 0, 00:05:40.589 "rw_mbytes_per_sec": 0, 00:05:40.589 "r_mbytes_per_sec": 0, 00:05:40.589 "w_mbytes_per_sec": 0 00:05:40.589 }, 00:05:40.589 "claimed": false, 00:05:40.589 "zoned": false, 00:05:40.589 "supported_io_types": { 00:05:40.589 "read": true, 00:05:40.589 "write": true, 00:05:40.589 "unmap": true, 00:05:40.589 "flush": true, 00:05:40.589 "reset": true, 00:05:40.589 "nvme_admin": false, 00:05:40.589 "nvme_io": false, 00:05:40.589 "nvme_io_md": false, 00:05:40.589 "write_zeroes": true, 00:05:40.589 "zcopy": true, 00:05:40.589 "get_zone_info": false, 00:05:40.589 "zone_management": false, 00:05:40.589 "zone_append": false, 00:05:40.589 "compare": false, 00:05:40.589 "compare_and_write": false, 00:05:40.589 "abort": true, 00:05:40.589 
"seek_hole": false, 00:05:40.589 "seek_data": false, 00:05:40.589 "copy": true, 00:05:40.589 "nvme_iov_md": false 00:05:40.589 }, 00:05:40.589 "memory_domains": [ 00:05:40.589 { 00:05:40.589 "dma_device_id": "system", 00:05:40.589 "dma_device_type": 1 00:05:40.589 }, 00:05:40.589 { 00:05:40.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.589 "dma_device_type": 2 00:05:40.589 } 00:05:40.589 ], 00:05:40.589 "driver_specific": { 00:05:40.589 "passthru": { 00:05:40.589 "name": "Passthru0", 00:05:40.589 "base_bdev_name": "Malloc0" 00:05:40.589 } 00:05:40.589 } 00:05:40.589 } 00:05:40.589 ]' 00:05:40.589 14:34:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:40.589 14:34:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:40.589 14:34:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:40.589 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.589 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.589 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.589 14:34:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:40.589 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.589 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.589 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.589 14:34:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:40.589 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.589 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.589 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.589 14:34:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:40.589 14:34:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:40.589 14:34:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:40.589 00:05:40.589 real 0m0.286s 00:05:40.589 user 0m0.173s 00:05:40.589 sys 0m0.050s 00:05:40.589 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.589 14:34:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.589 ************************************ 00:05:40.589 END TEST rpc_integrity 00:05:40.589 ************************************ 00:05:40.589 14:34:17 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:40.589 14:34:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:40.589 14:34:17 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.589 14:34:17 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.589 14:34:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.848 ************************************ 00:05:40.848 START TEST rpc_plugins 00:05:40.848 ************************************ 00:05:40.848 14:34:17 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:40.848 14:34:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:40.848 14:34:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.848 14:34:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.848 14:34:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.848 14:34:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:40.848 14:34:17 rpc.rpc_plugins 
-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:40.848 14:34:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.848 14:34:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.848 14:34:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.848 14:34:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:40.848 { 00:05:40.848 "name": "Malloc1", 00:05:40.848 "aliases": [ 00:05:40.848 "100b29b9-a51a-4ce9-ad4a-b312bd90e527" 00:05:40.848 ], 00:05:40.848 "product_name": "Malloc disk", 00:05:40.848 "block_size": 4096, 00:05:40.848 "num_blocks": 256, 00:05:40.848 "uuid": "100b29b9-a51a-4ce9-ad4a-b312bd90e527", 00:05:40.848 "assigned_rate_limits": { 00:05:40.848 "rw_ios_per_sec": 0, 00:05:40.848 "rw_mbytes_per_sec": 0, 00:05:40.848 "r_mbytes_per_sec": 0, 00:05:40.848 "w_mbytes_per_sec": 0 00:05:40.848 }, 00:05:40.848 "claimed": false, 00:05:40.848 "zoned": false, 00:05:40.848 "supported_io_types": { 00:05:40.848 "read": true, 00:05:40.848 "write": true, 00:05:40.848 "unmap": true, 00:05:40.848 "flush": true, 00:05:40.848 "reset": true, 00:05:40.848 "nvme_admin": false, 00:05:40.848 "nvme_io": false, 00:05:40.848 "nvme_io_md": false, 00:05:40.848 "write_zeroes": true, 00:05:40.848 "zcopy": true, 00:05:40.848 "get_zone_info": false, 00:05:40.848 "zone_management": false, 00:05:40.848 "zone_append": false, 00:05:40.848 "compare": false, 00:05:40.848 "compare_and_write": false, 00:05:40.848 "abort": true, 00:05:40.848 "seek_hole": false, 00:05:40.848 "seek_data": false, 00:05:40.848 "copy": true, 00:05:40.848 "nvme_iov_md": false 00:05:40.848 }, 00:05:40.848 "memory_domains": [ 00:05:40.848 { 00:05:40.848 "dma_device_id": "system", 00:05:40.848 "dma_device_type": 1 00:05:40.848 }, 00:05:40.848 { 00:05:40.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.848 "dma_device_type": 2 00:05:40.848 } 00:05:40.848 ], 00:05:40.849 "driver_specific": {} 00:05:40.849 } 00:05:40.849 ]' 00:05:40.849 14:34:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:40.849 14:34:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:40.849 14:34:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:40.849 14:34:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.849 14:34:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.849 14:34:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.849 14:34:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:40.849 14:34:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.849 14:34:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.849 14:34:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.849 14:34:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:40.849 14:34:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:40.849 14:34:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:40.849 00:05:40.849 real 0m0.151s 00:05:40.849 user 0m0.087s 00:05:40.849 sys 0m0.028s 00:05:40.849 14:34:17 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.849 14:34:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.849 ************************************ 00:05:40.849 END TEST rpc_plugins 00:05:40.849 ************************************ 00:05:40.849 14:34:17 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:40.849 14:34:17 rpc 
-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:40.849 14:34:17 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.849 14:34:17 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.849 14:34:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.107 ************************************ 00:05:41.107 START TEST rpc_trace_cmd_test 00:05:41.107 ************************************ 00:05:41.107 14:34:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:41.107 14:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:41.107 14:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:41.107 14:34:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.107 14:34:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:41.107 14:34:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.107 14:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:41.107 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1406526", 00:05:41.107 "tpoint_group_mask": "0x8", 00:05:41.107 "iscsi_conn": { 00:05:41.107 "mask": "0x2", 00:05:41.107 "tpoint_mask": "0x0" 00:05:41.107 }, 00:05:41.107 "scsi": { 00:05:41.107 "mask": "0x4", 00:05:41.107 "tpoint_mask": "0x0" 00:05:41.107 }, 00:05:41.107 "bdev": { 00:05:41.107 "mask": "0x8", 00:05:41.107 "tpoint_mask": "0xffffffffffffffff" 00:05:41.107 }, 00:05:41.107 "nvmf_rdma": { 00:05:41.107 "mask": "0x10", 00:05:41.107 "tpoint_mask": "0x0" 00:05:41.107 }, 00:05:41.107 "nvmf_tcp": { 00:05:41.107 "mask": "0x20", 00:05:41.107 "tpoint_mask": "0x0" 00:05:41.107 }, 00:05:41.107 "ftl": { 00:05:41.107 "mask": "0x40", 00:05:41.107 "tpoint_mask": "0x0" 00:05:41.107 }, 00:05:41.107 "blobfs": { 00:05:41.107 "mask": "0x80", 00:05:41.107 "tpoint_mask": "0x0" 00:05:41.107 }, 00:05:41.107 "dsa": { 00:05:41.107 "mask": "0x200", 00:05:41.107 "tpoint_mask": "0x0" 00:05:41.107 }, 00:05:41.107 "thread": { 00:05:41.107 "mask": "0x400", 00:05:41.107 "tpoint_mask": "0x0" 00:05:41.107 }, 00:05:41.107 "nvme_pcie": { 00:05:41.107 "mask": "0x800", 00:05:41.107 "tpoint_mask": "0x0" 00:05:41.107 }, 00:05:41.107 "iaa": { 00:05:41.107 "mask": "0x1000", 00:05:41.107 "tpoint_mask": "0x0" 00:05:41.107 }, 00:05:41.107 "nvme_tcp": { 00:05:41.107 "mask": "0x2000", 00:05:41.107 "tpoint_mask": "0x0" 00:05:41.107 }, 00:05:41.107 "bdev_nvme": { 00:05:41.107 "mask": "0x4000", 00:05:41.107 "tpoint_mask": "0x0" 00:05:41.107 }, 00:05:41.107 "sock": { 00:05:41.107 "mask": "0x8000", 00:05:41.107 "tpoint_mask": "0x0" 00:05:41.107 } 00:05:41.107 }' 00:05:41.107 14:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:41.107 14:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:41.107 14:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:41.107 14:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:41.107 14:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:41.107 14:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:41.107 14:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:41.107 14:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:41.107 14:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:41.107 14:34:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 
0xffffffffffffffff '!=' 0x0 ']' 00:05:41.107 00:05:41.107 real 0m0.238s 00:05:41.107 user 0m0.191s 00:05:41.107 sys 0m0.039s 00:05:41.107 14:34:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.107 14:34:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:41.107 ************************************ 00:05:41.107 END TEST rpc_trace_cmd_test 00:05:41.107 ************************************ 00:05:41.365 14:34:17 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:41.365 14:34:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:41.365 14:34:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:41.365 14:34:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:41.365 14:34:17 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.365 14:34:17 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.365 14:34:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.365 ************************************ 00:05:41.365 START TEST rpc_daemon_integrity 00:05:41.365 ************************************ 00:05:41.365 14:34:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:41.365 14:34:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:41.365 14:34:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.365 14:34:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.365 14:34:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.365 14:34:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:41.365 14:34:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:41.365 14:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:41.365 14:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.365 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.365 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.365 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.365 14:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:41.365 14:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:41.365 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.365 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.365 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.365 14:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:41.365 { 00:05:41.365 "name": "Malloc2", 00:05:41.365 "aliases": [ 00:05:41.365 "f249602e-6092-48fd-a6d9-7f050ab4df94" 00:05:41.365 ], 00:05:41.365 "product_name": "Malloc disk", 00:05:41.365 "block_size": 512, 00:05:41.365 "num_blocks": 16384, 00:05:41.365 "uuid": "f249602e-6092-48fd-a6d9-7f050ab4df94", 00:05:41.365 "assigned_rate_limits": { 00:05:41.365 "rw_ios_per_sec": 0, 00:05:41.365 "rw_mbytes_per_sec": 0, 00:05:41.365 "r_mbytes_per_sec": 0, 00:05:41.365 "w_mbytes_per_sec": 0 00:05:41.365 }, 00:05:41.365 "claimed": false, 00:05:41.365 "zoned": false, 00:05:41.365 "supported_io_types": { 00:05:41.365 "read": true, 00:05:41.365 "write": true, 00:05:41.365 "unmap": true, 00:05:41.365 "flush": true, 00:05:41.365 "reset": true, 00:05:41.365 "nvme_admin": false, 
00:05:41.365 "nvme_io": false, 00:05:41.365 "nvme_io_md": false, 00:05:41.365 "write_zeroes": true, 00:05:41.365 "zcopy": true, 00:05:41.365 "get_zone_info": false, 00:05:41.365 "zone_management": false, 00:05:41.365 "zone_append": false, 00:05:41.365 "compare": false, 00:05:41.365 "compare_and_write": false, 00:05:41.365 "abort": true, 00:05:41.365 "seek_hole": false, 00:05:41.365 "seek_data": false, 00:05:41.365 "copy": true, 00:05:41.365 "nvme_iov_md": false 00:05:41.365 }, 00:05:41.365 "memory_domains": [ 00:05:41.365 { 00:05:41.365 "dma_device_id": "system", 00:05:41.365 "dma_device_type": 1 00:05:41.365 }, 00:05:41.365 { 00:05:41.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.365 "dma_device_type": 2 00:05:41.365 } 00:05:41.365 ], 00:05:41.365 "driver_specific": {} 00:05:41.365 } 00:05:41.365 ]' 00:05:41.365 14:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:41.365 14:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:41.365 14:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:41.365 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.365 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.365 [2024-07-12 14:34:18.114405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:41.365 [2024-07-12 14:34:18.114439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:41.365 [2024-07-12 14:34:18.114454] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x472d170 00:05:41.366 [2024-07-12 14:34:18.114464] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:41.366 [2024-07-12 14:34:18.115202] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:41.366 [2024-07-12 14:34:18.115222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:41.366 Passthru0 00:05:41.366 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.366 14:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:41.366 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.366 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.624 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.624 14:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:41.624 { 00:05:41.624 "name": "Malloc2", 00:05:41.624 "aliases": [ 00:05:41.624 "f249602e-6092-48fd-a6d9-7f050ab4df94" 00:05:41.624 ], 00:05:41.624 "product_name": "Malloc disk", 00:05:41.624 "block_size": 512, 00:05:41.624 "num_blocks": 16384, 00:05:41.624 "uuid": "f249602e-6092-48fd-a6d9-7f050ab4df94", 00:05:41.624 "assigned_rate_limits": { 00:05:41.624 "rw_ios_per_sec": 0, 00:05:41.624 "rw_mbytes_per_sec": 0, 00:05:41.624 "r_mbytes_per_sec": 0, 00:05:41.624 "w_mbytes_per_sec": 0 00:05:41.624 }, 00:05:41.624 "claimed": true, 00:05:41.624 "claim_type": "exclusive_write", 00:05:41.624 "zoned": false, 00:05:41.624 "supported_io_types": { 00:05:41.624 "read": true, 00:05:41.624 "write": true, 00:05:41.624 "unmap": true, 00:05:41.624 "flush": true, 00:05:41.624 "reset": true, 00:05:41.624 "nvme_admin": false, 00:05:41.624 "nvme_io": false, 00:05:41.624 "nvme_io_md": false, 00:05:41.624 "write_zeroes": true, 00:05:41.624 "zcopy": true, 
00:05:41.624 "get_zone_info": false, 00:05:41.624 "zone_management": false, 00:05:41.624 "zone_append": false, 00:05:41.624 "compare": false, 00:05:41.624 "compare_and_write": false, 00:05:41.624 "abort": true, 00:05:41.624 "seek_hole": false, 00:05:41.624 "seek_data": false, 00:05:41.624 "copy": true, 00:05:41.624 "nvme_iov_md": false 00:05:41.624 }, 00:05:41.624 "memory_domains": [ 00:05:41.624 { 00:05:41.624 "dma_device_id": "system", 00:05:41.624 "dma_device_type": 1 00:05:41.624 }, 00:05:41.624 { 00:05:41.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.624 "dma_device_type": 2 00:05:41.624 } 00:05:41.624 ], 00:05:41.624 "driver_specific": {} 00:05:41.624 }, 00:05:41.624 { 00:05:41.624 "name": "Passthru0", 00:05:41.624 "aliases": [ 00:05:41.624 "28e00413-4fc9-5f8f-a360-946972d5492d" 00:05:41.624 ], 00:05:41.624 "product_name": "passthru", 00:05:41.624 "block_size": 512, 00:05:41.624 "num_blocks": 16384, 00:05:41.624 "uuid": "28e00413-4fc9-5f8f-a360-946972d5492d", 00:05:41.624 "assigned_rate_limits": { 00:05:41.624 "rw_ios_per_sec": 0, 00:05:41.624 "rw_mbytes_per_sec": 0, 00:05:41.624 "r_mbytes_per_sec": 0, 00:05:41.624 "w_mbytes_per_sec": 0 00:05:41.624 }, 00:05:41.624 "claimed": false, 00:05:41.624 "zoned": false, 00:05:41.624 "supported_io_types": { 00:05:41.624 "read": true, 00:05:41.624 "write": true, 00:05:41.624 "unmap": true, 00:05:41.624 "flush": true, 00:05:41.624 "reset": true, 00:05:41.624 "nvme_admin": false, 00:05:41.624 "nvme_io": false, 00:05:41.624 "nvme_io_md": false, 00:05:41.624 "write_zeroes": true, 00:05:41.624 "zcopy": true, 00:05:41.624 "get_zone_info": false, 00:05:41.624 "zone_management": false, 00:05:41.624 "zone_append": false, 00:05:41.624 "compare": false, 00:05:41.624 "compare_and_write": false, 00:05:41.624 "abort": true, 00:05:41.624 "seek_hole": false, 00:05:41.624 "seek_data": false, 00:05:41.624 "copy": true, 00:05:41.624 "nvme_iov_md": false 00:05:41.624 }, 00:05:41.624 "memory_domains": [ 00:05:41.624 { 00:05:41.624 "dma_device_id": "system", 00:05:41.624 "dma_device_type": 1 00:05:41.624 }, 00:05:41.624 { 00:05:41.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.624 "dma_device_type": 2 00:05:41.624 } 00:05:41.624 ], 00:05:41.624 "driver_specific": { 00:05:41.624 "passthru": { 00:05:41.624 "name": "Passthru0", 00:05:41.624 "base_bdev_name": "Malloc2" 00:05:41.624 } 00:05:41.624 } 00:05:41.624 } 00:05:41.624 ]' 00:05:41.624 14:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:41.624 14:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:41.625 14:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:41.625 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.625 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.625 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.625 14:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:41.625 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.625 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.625 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.625 14:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:41.625 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 
00:05:41.625 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.625 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.625 14:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:41.625 14:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:41.625 14:34:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:41.625 00:05:41.625 real 0m0.297s 00:05:41.625 user 0m0.181s 00:05:41.625 sys 0m0.056s 00:05:41.625 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.625 14:34:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.625 ************************************ 00:05:41.625 END TEST rpc_daemon_integrity 00:05:41.625 ************************************ 00:05:41.625 14:34:18 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:41.625 14:34:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:41.625 14:34:18 rpc -- rpc/rpc.sh@84 -- # killprocess 1406526 00:05:41.625 14:34:18 rpc -- common/autotest_common.sh@948 -- # '[' -z 1406526 ']' 00:05:41.625 14:34:18 rpc -- common/autotest_common.sh@952 -- # kill -0 1406526 00:05:41.625 14:34:18 rpc -- common/autotest_common.sh@953 -- # uname 00:05:41.625 14:34:18 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:41.625 14:34:18 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1406526 00:05:41.625 14:34:18 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:41.625 14:34:18 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:41.625 14:34:18 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1406526' 00:05:41.625 killing process with pid 1406526 00:05:41.625 14:34:18 rpc -- common/autotest_common.sh@967 -- # kill 1406526 00:05:41.625 14:34:18 rpc -- common/autotest_common.sh@972 -- # wait 1406526 00:05:42.192 00:05:42.192 real 0m2.645s 00:05:42.192 user 0m3.304s 00:05:42.192 sys 0m0.873s 00:05:42.192 14:34:18 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.192 14:34:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.192 ************************************ 00:05:42.192 END TEST rpc 00:05:42.192 ************************************ 00:05:42.192 14:34:18 -- common/autotest_common.sh@1142 -- # return 0 00:05:42.192 14:34:18 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:42.192 14:34:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.192 14:34:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.192 14:34:18 -- common/autotest_common.sh@10 -- # set +x 00:05:42.192 ************************************ 00:05:42.192 START TEST skip_rpc 00:05:42.192 ************************************ 00:05:42.192 14:34:18 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:42.192 * Looking for test storage... 
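Note on the rpc_daemon_integrity pass that finishes above: stripped of the xtrace noise, it creates a passthru bdev on top of Malloc2, checks that bdev_get_bdevs now reports two bdevs, tears both down, and checks the list is empty again. A condensed sketch of that sequence (paths abbreviated; the jq length checks mirror rpc.sh):

    rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
    bdevs=$(rpc.py bdev_get_bdevs)
    [ "$(jq length <<< "$bdevs")" == 2 ]                 # Malloc2 + Passthru0
    rpc.py bdev_passthru_delete Passthru0
    rpc.py bdev_malloc_delete Malloc2
    [ "$(rpc.py bdev_get_bdevs | jq length)" == 0 ]      # nothing left registered
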
00:05:42.192 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:42.192 14:34:18 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:42.192 14:34:18 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:42.192 14:34:18 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:42.192 14:34:18 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.192 14:34:18 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.192 14:34:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.192 ************************************ 00:05:42.192 START TEST skip_rpc 00:05:42.192 ************************************ 00:05:42.192 14:34:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:42.192 14:34:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1407080 00:05:42.192 14:34:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.192 14:34:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:42.192 14:34:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:42.192 [2024-07-12 14:34:18.957837] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:05:42.192 [2024-07-12 14:34:18.957913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1407080 ] 00:05:42.451 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.451 [2024-07-12 14:34:19.045562] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.451 [2024-07-12 14:34:19.136107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:47.737 14:34:23 skip_rpc.skip_rpc 
-- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1407080 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1407080 ']' 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1407080 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1407080 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1407080' 00:05:47.737 killing process with pid 1407080 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1407080 00:05:47.737 14:34:23 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1407080 00:05:47.737 00:05:47.737 real 0m5.411s 00:05:47.737 user 0m5.110s 00:05:47.737 sys 0m0.332s 00:05:47.737 14:34:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.737 14:34:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.737 ************************************ 00:05:47.737 END TEST skip_rpc 00:05:47.738 ************************************ 00:05:47.738 14:34:24 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:47.738 14:34:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:47.738 14:34:24 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.738 14:34:24 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.738 14:34:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.738 ************************************ 00:05:47.738 START TEST skip_rpc_with_json 00:05:47.738 ************************************ 00:05:47.738 14:34:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:47.738 14:34:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:47.738 14:34:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1407849 00:05:47.738 14:34:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.738 14:34:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.738 14:34:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1407849 00:05:47.738 14:34:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1407849 ']' 00:05:47.738 14:34:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.738 14:34:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.738 14:34:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
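Note on the skip_rpc case that finishes above: with spdk_tgt launched under --no-rpc-server, any rpc_cmd (a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock) must fail, and the NOT helper turns that expected failure into a pass. A condensed sketch of the same assertion (paths abbreviated; NOT here is a simplified stand-in for the autotest_common.sh helper):

    NOT() { "$@" && return 1 || return 0; }    # succeed only when the wrapped command fails
    spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5                                    # the test sleeps; the RPC socket will never appear
    NOT rpc.py spdk_get_version                # no RPC server, so this has to error out
    kill "$spdk_pid"; wait "$spdk_pid" || true
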
00:05:47.738 14:34:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.738 14:34:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.738 [2024-07-12 14:34:24.458356] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:05:47.738 [2024-07-12 14:34:24.458429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1407849 ] 00:05:47.738 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.997 [2024-07-12 14:34:24.548637] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.997 [2024-07-12 14:34:24.640441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.565 14:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.565 14:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:48.565 14:34:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:48.565 14:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.565 14:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.565 [2024-07-12 14:34:25.296976] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:48.565 request: 00:05:48.565 { 00:05:48.565 "trtype": "tcp", 00:05:48.565 "method": "nvmf_get_transports", 00:05:48.565 "req_id": 1 00:05:48.565 } 00:05:48.565 Got JSON-RPC error response 00:05:48.565 response: 00:05:48.565 { 00:05:48.565 "code": -19, 00:05:48.565 "message": "No such device" 00:05:48.565 } 00:05:48.565 14:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:48.565 14:34:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:48.565 14:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.565 14:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.565 [2024-07-12 14:34:25.309067] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:48.565 14:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.565 14:34:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:48.565 14:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.565 14:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.825 14:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.825 14:34:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:48.825 { 00:05:48.825 "subsystems": [ 00:05:48.825 { 00:05:48.825 "subsystem": "scheduler", 00:05:48.825 "config": [ 00:05:48.825 { 00:05:48.825 "method": "framework_set_scheduler", 00:05:48.825 "params": { 00:05:48.825 "name": "static" 00:05:48.825 } 00:05:48.825 } 00:05:48.825 ] 00:05:48.825 }, 00:05:48.825 { 00:05:48.825 "subsystem": "vmd", 00:05:48.825 "config": [] 00:05:48.825 }, 00:05:48.825 { 00:05:48.825 "subsystem": "sock", 00:05:48.825 "config": [ 00:05:48.825 { 00:05:48.825 "method": "sock_set_default_impl", 00:05:48.825 
"params": { 00:05:48.825 "impl_name": "posix" 00:05:48.825 } 00:05:48.825 }, 00:05:48.825 { 00:05:48.825 "method": "sock_impl_set_options", 00:05:48.825 "params": { 00:05:48.825 "impl_name": "ssl", 00:05:48.825 "recv_buf_size": 4096, 00:05:48.825 "send_buf_size": 4096, 00:05:48.825 "enable_recv_pipe": true, 00:05:48.825 "enable_quickack": false, 00:05:48.825 "enable_placement_id": 0, 00:05:48.825 "enable_zerocopy_send_server": true, 00:05:48.825 "enable_zerocopy_send_client": false, 00:05:48.825 "zerocopy_threshold": 0, 00:05:48.825 "tls_version": 0, 00:05:48.825 "enable_ktls": false 00:05:48.825 } 00:05:48.825 }, 00:05:48.825 { 00:05:48.825 "method": "sock_impl_set_options", 00:05:48.825 "params": { 00:05:48.825 "impl_name": "posix", 00:05:48.825 "recv_buf_size": 2097152, 00:05:48.825 "send_buf_size": 2097152, 00:05:48.825 "enable_recv_pipe": true, 00:05:48.825 "enable_quickack": false, 00:05:48.825 "enable_placement_id": 0, 00:05:48.825 "enable_zerocopy_send_server": true, 00:05:48.825 "enable_zerocopy_send_client": false, 00:05:48.825 "zerocopy_threshold": 0, 00:05:48.825 "tls_version": 0, 00:05:48.825 "enable_ktls": false 00:05:48.825 } 00:05:48.825 } 00:05:48.825 ] 00:05:48.825 }, 00:05:48.825 { 00:05:48.825 "subsystem": "iobuf", 00:05:48.825 "config": [ 00:05:48.825 { 00:05:48.825 "method": "iobuf_set_options", 00:05:48.825 "params": { 00:05:48.825 "small_pool_count": 8192, 00:05:48.825 "large_pool_count": 1024, 00:05:48.825 "small_bufsize": 8192, 00:05:48.825 "large_bufsize": 135168 00:05:48.825 } 00:05:48.825 } 00:05:48.825 ] 00:05:48.825 }, 00:05:48.825 { 00:05:48.825 "subsystem": "keyring", 00:05:48.825 "config": [] 00:05:48.825 }, 00:05:48.825 { 00:05:48.825 "subsystem": "vfio_user_target", 00:05:48.825 "config": null 00:05:48.825 }, 00:05:48.825 { 00:05:48.825 "subsystem": "accel", 00:05:48.825 "config": [ 00:05:48.825 { 00:05:48.825 "method": "accel_set_options", 00:05:48.825 "params": { 00:05:48.825 "small_cache_size": 128, 00:05:48.825 "large_cache_size": 16, 00:05:48.825 "task_count": 2048, 00:05:48.825 "sequence_count": 2048, 00:05:48.825 "buf_count": 2048 00:05:48.825 } 00:05:48.825 } 00:05:48.825 ] 00:05:48.825 }, 00:05:48.825 { 00:05:48.825 "subsystem": "bdev", 00:05:48.825 "config": [ 00:05:48.825 { 00:05:48.825 "method": "bdev_set_options", 00:05:48.825 "params": { 00:05:48.825 "bdev_io_pool_size": 65535, 00:05:48.825 "bdev_io_cache_size": 256, 00:05:48.825 "bdev_auto_examine": true, 00:05:48.825 "iobuf_small_cache_size": 128, 00:05:48.825 "iobuf_large_cache_size": 16 00:05:48.825 } 00:05:48.825 }, 00:05:48.825 { 00:05:48.825 "method": "bdev_raid_set_options", 00:05:48.825 "params": { 00:05:48.825 "process_window_size_kb": 1024 00:05:48.825 } 00:05:48.825 }, 00:05:48.825 { 00:05:48.825 "method": "bdev_nvme_set_options", 00:05:48.825 "params": { 00:05:48.825 "action_on_timeout": "none", 00:05:48.825 "timeout_us": 0, 00:05:48.825 "timeout_admin_us": 0, 00:05:48.825 "keep_alive_timeout_ms": 10000, 00:05:48.825 "arbitration_burst": 0, 00:05:48.825 "low_priority_weight": 0, 00:05:48.825 "medium_priority_weight": 0, 00:05:48.826 "high_priority_weight": 0, 00:05:48.826 "nvme_adminq_poll_period_us": 10000, 00:05:48.826 "nvme_ioq_poll_period_us": 0, 00:05:48.826 "io_queue_requests": 0, 00:05:48.826 "delay_cmd_submit": true, 00:05:48.826 "transport_retry_count": 4, 00:05:48.826 "bdev_retry_count": 3, 00:05:48.826 "transport_ack_timeout": 0, 00:05:48.826 "ctrlr_loss_timeout_sec": 0, 00:05:48.826 "reconnect_delay_sec": 0, 00:05:48.826 "fast_io_fail_timeout_sec": 0, 00:05:48.826 
"disable_auto_failback": false, 00:05:48.826 "generate_uuids": false, 00:05:48.826 "transport_tos": 0, 00:05:48.826 "nvme_error_stat": false, 00:05:48.826 "rdma_srq_size": 0, 00:05:48.826 "io_path_stat": false, 00:05:48.826 "allow_accel_sequence": false, 00:05:48.826 "rdma_max_cq_size": 0, 00:05:48.826 "rdma_cm_event_timeout_ms": 0, 00:05:48.826 "dhchap_digests": [ 00:05:48.826 "sha256", 00:05:48.826 "sha384", 00:05:48.826 "sha512" 00:05:48.826 ], 00:05:48.826 "dhchap_dhgroups": [ 00:05:48.826 "null", 00:05:48.826 "ffdhe2048", 00:05:48.826 "ffdhe3072", 00:05:48.826 "ffdhe4096", 00:05:48.826 "ffdhe6144", 00:05:48.826 "ffdhe8192" 00:05:48.826 ] 00:05:48.826 } 00:05:48.826 }, 00:05:48.826 { 00:05:48.826 "method": "bdev_nvme_set_hotplug", 00:05:48.826 "params": { 00:05:48.826 "period_us": 100000, 00:05:48.826 "enable": false 00:05:48.826 } 00:05:48.826 }, 00:05:48.826 { 00:05:48.826 "method": "bdev_iscsi_set_options", 00:05:48.826 "params": { 00:05:48.826 "timeout_sec": 30 00:05:48.826 } 00:05:48.826 }, 00:05:48.826 { 00:05:48.826 "method": "bdev_wait_for_examine" 00:05:48.826 } 00:05:48.826 ] 00:05:48.826 }, 00:05:48.826 { 00:05:48.826 "subsystem": "nvmf", 00:05:48.826 "config": [ 00:05:48.826 { 00:05:48.826 "method": "nvmf_set_config", 00:05:48.826 "params": { 00:05:48.826 "discovery_filter": "match_any", 00:05:48.826 "admin_cmd_passthru": { 00:05:48.826 "identify_ctrlr": false 00:05:48.826 } 00:05:48.826 } 00:05:48.826 }, 00:05:48.826 { 00:05:48.826 "method": "nvmf_set_max_subsystems", 00:05:48.826 "params": { 00:05:48.826 "max_subsystems": 1024 00:05:48.826 } 00:05:48.826 }, 00:05:48.826 { 00:05:48.826 "method": "nvmf_set_crdt", 00:05:48.826 "params": { 00:05:48.826 "crdt1": 0, 00:05:48.826 "crdt2": 0, 00:05:48.826 "crdt3": 0 00:05:48.826 } 00:05:48.826 }, 00:05:48.826 { 00:05:48.826 "method": "nvmf_create_transport", 00:05:48.826 "params": { 00:05:48.826 "trtype": "TCP", 00:05:48.826 "max_queue_depth": 128, 00:05:48.826 "max_io_qpairs_per_ctrlr": 127, 00:05:48.826 "in_capsule_data_size": 4096, 00:05:48.826 "max_io_size": 131072, 00:05:48.826 "io_unit_size": 131072, 00:05:48.826 "max_aq_depth": 128, 00:05:48.826 "num_shared_buffers": 511, 00:05:48.826 "buf_cache_size": 4294967295, 00:05:48.826 "dif_insert_or_strip": false, 00:05:48.826 "zcopy": false, 00:05:48.826 "c2h_success": true, 00:05:48.826 "sock_priority": 0, 00:05:48.826 "abort_timeout_sec": 1, 00:05:48.826 "ack_timeout": 0, 00:05:48.826 "data_wr_pool_size": 0 00:05:48.826 } 00:05:48.826 } 00:05:48.826 ] 00:05:48.826 }, 00:05:48.826 { 00:05:48.826 "subsystem": "nbd", 00:05:48.826 "config": [] 00:05:48.826 }, 00:05:48.826 { 00:05:48.826 "subsystem": "ublk", 00:05:48.826 "config": [] 00:05:48.826 }, 00:05:48.826 { 00:05:48.826 "subsystem": "vhost_blk", 00:05:48.826 "config": [] 00:05:48.826 }, 00:05:48.826 { 00:05:48.826 "subsystem": "scsi", 00:05:48.826 "config": null 00:05:48.826 }, 00:05:48.826 { 00:05:48.826 "subsystem": "iscsi", 00:05:48.826 "config": [ 00:05:48.826 { 00:05:48.826 "method": "iscsi_set_options", 00:05:48.826 "params": { 00:05:48.826 "node_base": "iqn.2016-06.io.spdk", 00:05:48.826 "max_sessions": 128, 00:05:48.826 "max_connections_per_session": 2, 00:05:48.826 "max_queue_depth": 64, 00:05:48.826 "default_time2wait": 2, 00:05:48.826 "default_time2retain": 20, 00:05:48.826 "first_burst_length": 8192, 00:05:48.826 "immediate_data": true, 00:05:48.826 "allow_duplicated_isid": false, 00:05:48.826 "error_recovery_level": 0, 00:05:48.826 "nop_timeout": 60, 00:05:48.826 "nop_in_interval": 30, 00:05:48.826 
"disable_chap": false, 00:05:48.826 "require_chap": false, 00:05:48.826 "mutual_chap": false, 00:05:48.826 "chap_group": 0, 00:05:48.826 "max_large_datain_per_connection": 64, 00:05:48.826 "max_r2t_per_connection": 4, 00:05:48.826 "pdu_pool_size": 36864, 00:05:48.826 "immediate_data_pool_size": 16384, 00:05:48.826 "data_out_pool_size": 2048 00:05:48.826 } 00:05:48.826 } 00:05:48.826 ] 00:05:48.826 }, 00:05:48.826 { 00:05:48.826 "subsystem": "vhost_scsi", 00:05:48.826 "config": [] 00:05:48.826 } 00:05:48.826 ] 00:05:48.826 } 00:05:48.826 14:34:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:48.826 14:34:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1407849 00:05:48.826 14:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1407849 ']' 00:05:48.826 14:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1407849 00:05:48.826 14:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:48.826 14:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.826 14:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1407849 00:05:48.826 14:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.826 14:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.826 14:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1407849' 00:05:48.826 killing process with pid 1407849 00:05:48.826 14:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1407849 00:05:48.826 14:34:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1407849 00:05:49.395 14:34:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1408077 00:05:49.395 14:34:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:49.395 14:34:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:54.670 14:34:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1408077 00:05:54.670 14:34:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1408077 ']' 00:05:54.670 14:34:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1408077 00:05:54.670 14:34:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:54.670 14:34:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.670 14:34:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1408077 00:05:54.670 14:34:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.670 14:34:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.670 14:34:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1408077' 00:05:54.670 killing process with pid 1408077 00:05:54.670 14:34:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1408077 00:05:54.670 14:34:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1408077 00:05:54.670 
14:34:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:54.670 00:05:54.670 real 0m6.855s 00:05:54.670 user 0m6.565s 00:05:54.670 sys 0m0.745s 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.670 ************************************ 00:05:54.670 END TEST skip_rpc_with_json 00:05:54.670 ************************************ 00:05:54.670 14:34:31 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:54.670 14:34:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:54.670 14:34:31 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.670 14:34:31 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.670 14:34:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.670 ************************************ 00:05:54.670 START TEST skip_rpc_with_delay 00:05:54.670 ************************************ 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.670 [2024-07-12 14:34:31.405148] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
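Note on the skip_rpc_with_json pass that finishes above: it is a save_config round-trip, i.e. configure the running target over RPC, dump the JSON, relaunch with --json and no RPC server, and confirm the configuration was replayed by grepping the new log for the transport-init notice. A condensed sketch of that flow (paths abbreviated):

    rpc.py nvmf_create_transport -t tcp            # mutate the live target over RPC
    rpc.py save_config > config.json               # serialize its current state
    kill "$spdk_pid"; wait "$spdk_pid" || true
    spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    spdk_pid=$!
    sleep 5
    grep -q 'TCP Transport Init' log.txt           # the JSON really was applied at startup
    kill "$spdk_pid"; wait "$spdk_pid" || true
    rm config.json log.txt
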
00:05:54.670 [2024-07-12 14:34:31.405288] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:54.670 00:05:54.670 real 0m0.048s 00:05:54.670 user 0m0.016s 00:05:54.670 sys 0m0.032s 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.670 14:34:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:54.670 ************************************ 00:05:54.670 END TEST skip_rpc_with_delay 00:05:54.670 ************************************ 00:05:54.929 14:34:31 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:54.929 14:34:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:54.929 14:34:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:54.929 14:34:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:54.929 14:34:31 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.929 14:34:31 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.929 14:34:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.929 ************************************ 00:05:54.929 START TEST exit_on_failed_rpc_init 00:05:54.929 ************************************ 00:05:54.929 14:34:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:54.929 14:34:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1408886 00:05:54.929 14:34:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1408886 00:05:54.929 14:34:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.929 14:34:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1408886 ']' 00:05:54.929 14:34:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.929 14:34:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.929 14:34:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.929 14:34:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.929 14:34:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:54.929 [2024-07-12 14:34:31.537297] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:05:54.929 [2024-07-12 14:34:31.537379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1408886 ] 00:05:54.929 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.929 [2024-07-12 14:34:31.625172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.929 [2024-07-12 14:34:31.713982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:55.866 [2024-07-12 14:34:32.400016] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:05:55.866 [2024-07-12 14:34:32.400085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1408905 ] 00:05:55.866 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.866 [2024-07-12 14:34:32.491238] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.866 [2024-07-12 14:34:32.572153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.866 [2024-07-12 14:34:32.572243] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:55.866 [2024-07-12 14:34:32.572256] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:55.866 [2024-07-12 14:34:32.572264] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:55.866 14:34:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1408886 00:05:56.125 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1408886 ']' 00:05:56.125 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1408886 00:05:56.125 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:56.125 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.125 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1408886 00:05:56.125 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.125 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.125 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1408886' 00:05:56.125 killing process with pid 1408886 00:05:56.125 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1408886 00:05:56.125 14:34:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1408886 00:05:56.385 00:05:56.385 real 0m1.534s 00:05:56.385 user 0m1.730s 00:05:56.385 sys 0m0.468s 00:05:56.385 14:34:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.385 14:34:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.385 ************************************ 00:05:56.385 END TEST exit_on_failed_rpc_init 00:05:56.385 ************************************ 00:05:56.385 14:34:33 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:56.385 14:34:33 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:56.385 00:05:56.385 real 0m14.308s 00:05:56.385 user 0m13.585s 00:05:56.385 sys 0m1.909s 00:05:56.385 14:34:33 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.385 14:34:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.385 ************************************ 00:05:56.385 END TEST skip_rpc 00:05:56.385 ************************************ 00:05:56.385 14:34:33 -- common/autotest_common.sh@1142 -- # return 0 00:05:56.385 14:34:33 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:56.385 14:34:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.385 14:34:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.385 14:34:33 -- common/autotest_common.sh@10 -- # set +x 00:05:56.646 ************************************ 00:05:56.646 START TEST rpc_client 00:05:56.646 ************************************ 00:05:56.646 14:34:33 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:56.646 * Looking for test storage... 00:05:56.646 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:05:56.646 14:34:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:56.646 OK 00:05:56.646 14:34:33 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:56.646 00:05:56.646 real 0m0.132s 00:05:56.646 user 0m0.053s 00:05:56.646 sys 0m0.090s 00:05:56.646 14:34:33 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.646 14:34:33 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:56.646 ************************************ 00:05:56.646 END TEST rpc_client 00:05:56.646 ************************************ 00:05:56.646 14:34:33 -- common/autotest_common.sh@1142 -- # return 0 00:05:56.646 14:34:33 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:56.646 14:34:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.646 14:34:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.646 14:34:33 -- common/autotest_common.sh@10 -- # set +x 00:05:56.646 ************************************ 00:05:56.646 START TEST json_config 00:05:56.646 ************************************ 00:05:56.646 14:34:33 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:56.906 14:34:33 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:56.906 14:34:33 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:56.906 14:34:33 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.906 14:34:33 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.906 14:34:33 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.906 14:34:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.906 14:34:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.906 14:34:33 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.906 14:34:33 json_config -- paths/export.sh@5 -- # export PATH 00:05:56.906 14:34:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@47 -- # : 0 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 
00:05:56.906 14:34:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:56.906 14:34:33 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:56.906 14:34:33 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:56.906 14:34:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:56.906 14:34:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:56.906 14:34:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:56.906 14:34:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:56.906 14:34:33 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:56.906 WARNING: No tests are enabled so not running JSON configuration tests 00:05:56.906 14:34:33 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:56.906 00:05:56.906 real 0m0.110s 00:05:56.906 user 0m0.053s 00:05:56.906 sys 0m0.057s 00:05:56.906 14:34:33 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.906 14:34:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.906 ************************************ 00:05:56.906 END TEST json_config 00:05:56.906 ************************************ 00:05:56.906 14:34:33 -- common/autotest_common.sh@1142 -- # return 0 00:05:56.906 14:34:33 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:56.906 14:34:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.906 14:34:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.906 14:34:33 -- common/autotest_common.sh@10 -- # set +x 00:05:56.906 ************************************ 00:05:56.906 START TEST json_config_extra_key 00:05:56.906 ************************************ 00:05:56.906 14:34:33 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:56.906 14:34:33 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:56.906 14:34:33 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:56.906 14:34:33 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.906 14:34:33 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.906 14:34:33 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.906 14:34:33 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:56.906 14:34:33 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.906 14:34:33 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.906 14:34:33 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:56.906 14:34:33 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.906 14:34:33 json_config_extra_key -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.906 14:34:33 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.906 14:34:33 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:05:56.906 14:34:33 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:05:56.906 14:34:33 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.906 14:34:33 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.906 14:34:33 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:56.906 14:34:33 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:56.906 14:34:33 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:56.906 14:34:33 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.906 14:34:33 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.906 14:34:33 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.907 14:34:33 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.907 14:34:33 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.907 14:34:33 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.907 14:34:33 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:56.907 14:34:33 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.907 14:34:33 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:56.907 14:34:33 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:56.907 14:34:33 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:56.907 
14:34:33 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:56.907 14:34:33 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.907 14:34:33 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.907 14:34:33 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:56.907 14:34:33 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:56.907 14:34:33 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:57.166 14:34:33 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:57.166 14:34:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:57.166 14:34:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:57.166 14:34:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:57.166 14:34:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:57.166 14:34:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:57.166 14:34:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:57.166 14:34:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:57.166 14:34:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:57.166 14:34:33 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:57.166 14:34:33 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:57.166 INFO: launching applications... 00:05:57.166 14:34:33 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:57.166 14:34:33 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:57.166 14:34:33 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:57.166 14:34:33 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:57.166 14:34:33 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:57.166 14:34:33 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:57.166 14:34:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.166 14:34:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.166 14:34:33 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1409224 00:05:57.166 14:34:33 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:57.166 Waiting for target to run... 
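Note on the launch above: it starts spdk_tgt on a dedicated RPC socket (-r) with a small memory footprint (-s 1024), records the pid, and blocks until the socket answers before the test proceeds. A condensed sketch of that start/wait step (waitforlisten here is a simplified stand-in for the autotest_common.sh helper, not its literal source):

    spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json extra_key.json &
    app_pid=$!
    waitforlisten() {                              # poll the RPC socket instead of sleeping blindly
        local pid=$1 sock=$2
        for _ in $(seq 1 100); do
            kill -0 "$pid" 2>/dev/null || return 1               # target died during startup
            rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }
    waitforlisten "$app_pid" /var/tmp/spdk_tgt.sock
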
00:05:57.166 14:34:33 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1409224 /var/tmp/spdk_tgt.sock 00:05:57.166 14:34:33 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1409224 ']' 00:05:57.166 14:34:33 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:57.166 14:34:33 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.166 14:34:33 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.166 14:34:33 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:57.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:57.166 14:34:33 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.166 14:34:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:57.166 [2024-07-12 14:34:33.727556] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:05:57.166 [2024-07-12 14:34:33.727652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409224 ] 00:05:57.166 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.734 [2024-07-12 14:34:34.230643] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.734 [2024-07-12 14:34:34.328125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.992 14:34:34 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.992 14:34:34 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:57.992 14:34:34 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:57.992 00:05:57.992 14:34:34 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:57.992 INFO: shutting down applications... 
00:05:57.993 14:34:34 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:57.993 14:34:34 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:57.993 14:34:34 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:57.993 14:34:34 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1409224 ]] 00:05:57.993 14:34:34 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1409224 00:05:57.993 14:34:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:57.993 14:34:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:57.993 14:34:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1409224 00:05:57.993 14:34:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:58.561 14:34:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:58.561 14:34:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.561 14:34:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1409224 00:05:58.561 14:34:35 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:58.561 14:34:35 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:58.561 14:34:35 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:58.561 14:34:35 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:58.561 SPDK target shutdown done 00:05:58.561 14:34:35 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:58.561 Success 00:05:58.561 00:05:58.561 real 0m1.492s 00:05:58.561 user 0m1.080s 00:05:58.561 sys 0m0.622s 00:05:58.561 14:34:35 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.561 14:34:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:58.561 ************************************ 00:05:58.561 END TEST json_config_extra_key 00:05:58.561 ************************************ 00:05:58.561 14:34:35 -- common/autotest_common.sh@1142 -- # return 0 00:05:58.561 14:34:35 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:58.561 14:34:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.561 14:34:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.561 14:34:35 -- common/autotest_common.sh@10 -- # set +x 00:05:58.561 ************************************ 00:05:58.561 START TEST alias_rpc 00:05:58.561 ************************************ 00:05:58.561 14:34:35 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:58.561 * Looking for test storage... 
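Note: the shutdown just traced is a SIGINT followed by a bounded liveness poll: kill -0 is retried up to 30 times with a 0.5 s sleep until the pid is gone. A sketch of that loop, mirroring the trace rather than quoting json_config/common.sh verbatim:

    pid=${app_pid[$app]}
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # process is gone, shutdown complete
        sleep 0.5
    done
    app_pid[$app]=
    echo 'SPDK target shutdown done'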
00:05:58.561 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:58.561 14:34:35 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:58.561 14:34:35 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1409518 00:05:58.561 14:34:35 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1409518 00:05:58.561 14:34:35 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.561 14:34:35 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1409518 ']' 00:05:58.561 14:34:35 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.561 14:34:35 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.561 14:34:35 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.561 14:34:35 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.561 14:34:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.561 [2024-07-12 14:34:35.304417] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:05:58.561 [2024-07-12 14:34:35.304494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409518 ] 00:05:58.561 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.821 [2024-07-12 14:34:35.395988] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.821 [2024-07-12 14:34:35.495012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.389 14:34:36 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.389 14:34:36 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:59.389 14:34:36 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:59.648 14:34:36 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1409518 00:05:59.648 14:34:36 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1409518 ']' 00:05:59.648 14:34:36 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1409518 00:05:59.648 14:34:36 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:59.648 14:34:36 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.648 14:34:36 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1409518 00:05:59.648 14:34:36 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.649 14:34:36 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.649 14:34:36 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1409518' 00:05:59.649 killing process with pid 1409518 00:05:59.649 14:34:36 alias_rpc -- common/autotest_common.sh@967 -- # kill 1409518 00:05:59.649 14:34:36 alias_rpc -- common/autotest_common.sh@972 -- # wait 1409518 00:06:00.217 00:06:00.217 real 0m1.571s 00:06:00.217 user 0m1.642s 00:06:00.217 sys 0m0.497s 00:06:00.217 14:34:36 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.217 14:34:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 
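Note: killprocess (autotest_common.sh), traced above for pid 1409518, guards against signalling the wrong thing: on Linux it reads the command name with ps --no-headers -o comm= and refuses to kill a sudo wrapper before doing kill + wait. A hedged sketch of that guard with error handling simplified:

    killprocess() {
        local pid=$1 process_name=
        kill -0 "$pid" 2>/dev/null || return 1
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name != sudo ]] || return 1   # never kill the sudo parent
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }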
00:06:00.217 ************************************ 00:06:00.217 END TEST alias_rpc 00:06:00.217 ************************************ 00:06:00.217 14:34:36 -- common/autotest_common.sh@1142 -- # return 0 00:06:00.217 14:34:36 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:00.217 14:34:36 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:00.217 14:34:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.217 14:34:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.217 14:34:36 -- common/autotest_common.sh@10 -- # set +x 00:06:00.217 ************************************ 00:06:00.217 START TEST spdkcli_tcp 00:06:00.217 ************************************ 00:06:00.217 14:34:36 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:00.217 * Looking for test storage... 00:06:00.217 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:06:00.217 14:34:36 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:06:00.217 14:34:36 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:00.217 14:34:36 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:06:00.217 14:34:36 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:00.217 14:34:36 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:00.217 14:34:36 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:00.217 14:34:36 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:00.217 14:34:36 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:00.217 14:34:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.217 14:34:36 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1409844 00:06:00.217 14:34:36 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1409844 00:06:00.217 14:34:36 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:00.217 14:34:36 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1409844 ']' 00:06:00.217 14:34:36 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.217 14:34:36 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.217 14:34:36 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.217 14:34:36 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.217 14:34:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.217 [2024-07-12 14:34:36.964673] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:06:00.217 [2024-07-12 14:34:36.964746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409844 ] 00:06:00.217 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.476 [2024-07-12 14:34:37.055109] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.476 [2024-07-12 14:34:37.144171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.476 [2024-07-12 14:34:37.144171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.046 14:34:37 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.046 14:34:37 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:01.046 14:34:37 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1409859 00:06:01.046 14:34:37 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:01.046 14:34:37 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:01.305 [ 00:06:01.305 "spdk_get_version", 00:06:01.305 "rpc_get_methods", 00:06:01.305 "trace_get_info", 00:06:01.305 "trace_get_tpoint_group_mask", 00:06:01.305 "trace_disable_tpoint_group", 00:06:01.305 "trace_enable_tpoint_group", 00:06:01.305 "trace_clear_tpoint_mask", 00:06:01.305 "trace_set_tpoint_mask", 00:06:01.305 "vfu_tgt_set_base_path", 00:06:01.305 "framework_get_pci_devices", 00:06:01.305 "framework_get_config", 00:06:01.305 "framework_get_subsystems", 00:06:01.305 "keyring_get_keys", 00:06:01.305 "iobuf_get_stats", 00:06:01.305 "iobuf_set_options", 00:06:01.305 "sock_get_default_impl", 00:06:01.305 "sock_set_default_impl", 00:06:01.305 "sock_impl_set_options", 00:06:01.305 "sock_impl_get_options", 00:06:01.305 "vmd_rescan", 00:06:01.305 "vmd_remove_device", 00:06:01.305 "vmd_enable", 00:06:01.305 "accel_get_stats", 00:06:01.305 "accel_set_options", 00:06:01.305 "accel_set_driver", 00:06:01.305 "accel_crypto_key_destroy", 00:06:01.305 "accel_crypto_keys_get", 00:06:01.305 "accel_crypto_key_create", 00:06:01.305 "accel_assign_opc", 00:06:01.305 "accel_get_module_info", 00:06:01.305 "accel_get_opc_assignments", 00:06:01.305 "notify_get_notifications", 00:06:01.305 "notify_get_types", 00:06:01.305 "bdev_get_histogram", 00:06:01.305 "bdev_enable_histogram", 00:06:01.305 "bdev_set_qos_limit", 00:06:01.305 "bdev_set_qd_sampling_period", 00:06:01.305 "bdev_get_bdevs", 00:06:01.305 "bdev_reset_iostat", 00:06:01.306 "bdev_get_iostat", 00:06:01.306 "bdev_examine", 00:06:01.306 "bdev_wait_for_examine", 00:06:01.306 "bdev_set_options", 00:06:01.306 "scsi_get_devices", 00:06:01.306 "thread_set_cpumask", 00:06:01.306 "framework_get_governor", 00:06:01.306 "framework_get_scheduler", 00:06:01.306 "framework_set_scheduler", 00:06:01.306 "framework_get_reactors", 00:06:01.306 "thread_get_io_channels", 00:06:01.306 "thread_get_pollers", 00:06:01.306 "thread_get_stats", 00:06:01.306 "framework_monitor_context_switch", 00:06:01.306 "spdk_kill_instance", 00:06:01.306 "log_enable_timestamps", 00:06:01.306 "log_get_flags", 00:06:01.306 "log_clear_flag", 00:06:01.306 "log_set_flag", 00:06:01.306 "log_get_level", 00:06:01.306 "log_set_level", 00:06:01.306 "log_get_print_level", 00:06:01.306 "log_set_print_level", 00:06:01.306 "framework_enable_cpumask_locks", 00:06:01.306 "framework_disable_cpumask_locks", 
00:06:01.306 "framework_wait_init", 00:06:01.306 "framework_start_init", 00:06:01.306 "virtio_blk_create_transport", 00:06:01.306 "virtio_blk_get_transports", 00:06:01.306 "vhost_controller_set_coalescing", 00:06:01.306 "vhost_get_controllers", 00:06:01.306 "vhost_delete_controller", 00:06:01.306 "vhost_create_blk_controller", 00:06:01.306 "vhost_scsi_controller_remove_target", 00:06:01.306 "vhost_scsi_controller_add_target", 00:06:01.306 "vhost_start_scsi_controller", 00:06:01.306 "vhost_create_scsi_controller", 00:06:01.306 "ublk_recover_disk", 00:06:01.306 "ublk_get_disks", 00:06:01.306 "ublk_stop_disk", 00:06:01.306 "ublk_start_disk", 00:06:01.306 "ublk_destroy_target", 00:06:01.306 "ublk_create_target", 00:06:01.306 "nbd_get_disks", 00:06:01.306 "nbd_stop_disk", 00:06:01.306 "nbd_start_disk", 00:06:01.306 "env_dpdk_get_mem_stats", 00:06:01.306 "nvmf_stop_mdns_prr", 00:06:01.306 "nvmf_publish_mdns_prr", 00:06:01.306 "nvmf_subsystem_get_listeners", 00:06:01.306 "nvmf_subsystem_get_qpairs", 00:06:01.306 "nvmf_subsystem_get_controllers", 00:06:01.306 "nvmf_get_stats", 00:06:01.306 "nvmf_get_transports", 00:06:01.306 "nvmf_create_transport", 00:06:01.306 "nvmf_get_targets", 00:06:01.306 "nvmf_delete_target", 00:06:01.306 "nvmf_create_target", 00:06:01.306 "nvmf_subsystem_allow_any_host", 00:06:01.306 "nvmf_subsystem_remove_host", 00:06:01.306 "nvmf_subsystem_add_host", 00:06:01.306 "nvmf_ns_remove_host", 00:06:01.306 "nvmf_ns_add_host", 00:06:01.306 "nvmf_subsystem_remove_ns", 00:06:01.306 "nvmf_subsystem_add_ns", 00:06:01.306 "nvmf_subsystem_listener_set_ana_state", 00:06:01.306 "nvmf_discovery_get_referrals", 00:06:01.306 "nvmf_discovery_remove_referral", 00:06:01.306 "nvmf_discovery_add_referral", 00:06:01.306 "nvmf_subsystem_remove_listener", 00:06:01.306 "nvmf_subsystem_add_listener", 00:06:01.306 "nvmf_delete_subsystem", 00:06:01.306 "nvmf_create_subsystem", 00:06:01.306 "nvmf_get_subsystems", 00:06:01.306 "nvmf_set_crdt", 00:06:01.306 "nvmf_set_config", 00:06:01.306 "nvmf_set_max_subsystems", 00:06:01.306 "iscsi_get_histogram", 00:06:01.306 "iscsi_enable_histogram", 00:06:01.306 "iscsi_set_options", 00:06:01.306 "iscsi_get_auth_groups", 00:06:01.306 "iscsi_auth_group_remove_secret", 00:06:01.306 "iscsi_auth_group_add_secret", 00:06:01.306 "iscsi_delete_auth_group", 00:06:01.306 "iscsi_create_auth_group", 00:06:01.306 "iscsi_set_discovery_auth", 00:06:01.306 "iscsi_get_options", 00:06:01.306 "iscsi_target_node_request_logout", 00:06:01.306 "iscsi_target_node_set_redirect", 00:06:01.306 "iscsi_target_node_set_auth", 00:06:01.306 "iscsi_target_node_add_lun", 00:06:01.306 "iscsi_get_stats", 00:06:01.306 "iscsi_get_connections", 00:06:01.306 "iscsi_portal_group_set_auth", 00:06:01.306 "iscsi_start_portal_group", 00:06:01.306 "iscsi_delete_portal_group", 00:06:01.306 "iscsi_create_portal_group", 00:06:01.306 "iscsi_get_portal_groups", 00:06:01.306 "iscsi_delete_target_node", 00:06:01.306 "iscsi_target_node_remove_pg_ig_maps", 00:06:01.306 "iscsi_target_node_add_pg_ig_maps", 00:06:01.306 "iscsi_create_target_node", 00:06:01.306 "iscsi_get_target_nodes", 00:06:01.306 "iscsi_delete_initiator_group", 00:06:01.306 "iscsi_initiator_group_remove_initiators", 00:06:01.306 "iscsi_initiator_group_add_initiators", 00:06:01.306 "iscsi_create_initiator_group", 00:06:01.306 "iscsi_get_initiator_groups", 00:06:01.306 "keyring_linux_set_options", 00:06:01.306 "keyring_file_remove_key", 00:06:01.306 "keyring_file_add_key", 00:06:01.306 "vfu_virtio_create_scsi_endpoint", 00:06:01.306 
"vfu_virtio_scsi_remove_target", 00:06:01.306 "vfu_virtio_scsi_add_target", 00:06:01.306 "vfu_virtio_create_blk_endpoint", 00:06:01.306 "vfu_virtio_delete_endpoint", 00:06:01.306 "iaa_scan_accel_module", 00:06:01.306 "dsa_scan_accel_module", 00:06:01.306 "ioat_scan_accel_module", 00:06:01.306 "accel_error_inject_error", 00:06:01.306 "bdev_iscsi_delete", 00:06:01.306 "bdev_iscsi_create", 00:06:01.306 "bdev_iscsi_set_options", 00:06:01.306 "bdev_virtio_attach_controller", 00:06:01.306 "bdev_virtio_scsi_get_devices", 00:06:01.306 "bdev_virtio_detach_controller", 00:06:01.306 "bdev_virtio_blk_set_hotplug", 00:06:01.306 "bdev_ftl_set_property", 00:06:01.306 "bdev_ftl_get_properties", 00:06:01.306 "bdev_ftl_get_stats", 00:06:01.306 "bdev_ftl_unmap", 00:06:01.306 "bdev_ftl_unload", 00:06:01.306 "bdev_ftl_delete", 00:06:01.306 "bdev_ftl_load", 00:06:01.306 "bdev_ftl_create", 00:06:01.306 "bdev_aio_delete", 00:06:01.306 "bdev_aio_rescan", 00:06:01.306 "bdev_aio_create", 00:06:01.306 "blobfs_create", 00:06:01.306 "blobfs_detect", 00:06:01.306 "blobfs_set_cache_size", 00:06:01.306 "bdev_zone_block_delete", 00:06:01.306 "bdev_zone_block_create", 00:06:01.306 "bdev_delay_delete", 00:06:01.306 "bdev_delay_create", 00:06:01.306 "bdev_delay_update_latency", 00:06:01.306 "bdev_split_delete", 00:06:01.306 "bdev_split_create", 00:06:01.306 "bdev_error_inject_error", 00:06:01.306 "bdev_error_delete", 00:06:01.306 "bdev_error_create", 00:06:01.306 "bdev_raid_set_options", 00:06:01.306 "bdev_raid_remove_base_bdev", 00:06:01.306 "bdev_raid_add_base_bdev", 00:06:01.306 "bdev_raid_delete", 00:06:01.306 "bdev_raid_create", 00:06:01.306 "bdev_raid_get_bdevs", 00:06:01.306 "bdev_lvol_set_parent_bdev", 00:06:01.306 "bdev_lvol_set_parent", 00:06:01.306 "bdev_lvol_check_shallow_copy", 00:06:01.306 "bdev_lvol_start_shallow_copy", 00:06:01.306 "bdev_lvol_grow_lvstore", 00:06:01.306 "bdev_lvol_get_lvols", 00:06:01.306 "bdev_lvol_get_lvstores", 00:06:01.306 "bdev_lvol_delete", 00:06:01.306 "bdev_lvol_set_read_only", 00:06:01.306 "bdev_lvol_resize", 00:06:01.306 "bdev_lvol_decouple_parent", 00:06:01.306 "bdev_lvol_inflate", 00:06:01.306 "bdev_lvol_rename", 00:06:01.306 "bdev_lvol_clone_bdev", 00:06:01.306 "bdev_lvol_clone", 00:06:01.306 "bdev_lvol_snapshot", 00:06:01.306 "bdev_lvol_create", 00:06:01.306 "bdev_lvol_delete_lvstore", 00:06:01.306 "bdev_lvol_rename_lvstore", 00:06:01.306 "bdev_lvol_create_lvstore", 00:06:01.306 "bdev_passthru_delete", 00:06:01.306 "bdev_passthru_create", 00:06:01.306 "bdev_nvme_cuse_unregister", 00:06:01.306 "bdev_nvme_cuse_register", 00:06:01.306 "bdev_opal_new_user", 00:06:01.306 "bdev_opal_set_lock_state", 00:06:01.306 "bdev_opal_delete", 00:06:01.306 "bdev_opal_get_info", 00:06:01.306 "bdev_opal_create", 00:06:01.306 "bdev_nvme_opal_revert", 00:06:01.306 "bdev_nvme_opal_init", 00:06:01.306 "bdev_nvme_send_cmd", 00:06:01.306 "bdev_nvme_get_path_iostat", 00:06:01.306 "bdev_nvme_get_mdns_discovery_info", 00:06:01.306 "bdev_nvme_stop_mdns_discovery", 00:06:01.306 "bdev_nvme_start_mdns_discovery", 00:06:01.306 "bdev_nvme_set_multipath_policy", 00:06:01.306 "bdev_nvme_set_preferred_path", 00:06:01.306 "bdev_nvme_get_io_paths", 00:06:01.306 "bdev_nvme_remove_error_injection", 00:06:01.306 "bdev_nvme_add_error_injection", 00:06:01.306 "bdev_nvme_get_discovery_info", 00:06:01.306 "bdev_nvme_stop_discovery", 00:06:01.306 "bdev_nvme_start_discovery", 00:06:01.306 "bdev_nvme_get_controller_health_info", 00:06:01.306 "bdev_nvme_disable_controller", 00:06:01.306 "bdev_nvme_enable_controller", 00:06:01.306 
"bdev_nvme_reset_controller", 00:06:01.306 "bdev_nvme_get_transport_statistics", 00:06:01.306 "bdev_nvme_apply_firmware", 00:06:01.306 "bdev_nvme_detach_controller", 00:06:01.306 "bdev_nvme_get_controllers", 00:06:01.306 "bdev_nvme_attach_controller", 00:06:01.306 "bdev_nvme_set_hotplug", 00:06:01.306 "bdev_nvme_set_options", 00:06:01.306 "bdev_null_resize", 00:06:01.306 "bdev_null_delete", 00:06:01.306 "bdev_null_create", 00:06:01.306 "bdev_malloc_delete", 00:06:01.306 "bdev_malloc_create" 00:06:01.306 ] 00:06:01.306 14:34:37 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:01.306 14:34:37 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.306 14:34:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.306 14:34:38 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:01.306 14:34:38 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1409844 00:06:01.306 14:34:38 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1409844 ']' 00:06:01.306 14:34:38 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1409844 00:06:01.306 14:34:38 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:01.306 14:34:38 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.306 14:34:38 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1409844 00:06:01.306 14:34:38 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.306 14:34:38 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.306 14:34:38 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1409844' 00:06:01.306 killing process with pid 1409844 00:06:01.307 14:34:38 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1409844 00:06:01.307 14:34:38 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1409844 00:06:01.875 00:06:01.875 real 0m1.574s 00:06:01.875 user 0m2.816s 00:06:01.875 sys 0m0.547s 00:06:01.875 14:34:38 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.875 14:34:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.875 ************************************ 00:06:01.875 END TEST spdkcli_tcp 00:06:01.875 ************************************ 00:06:01.875 14:34:38 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.875 14:34:38 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.875 14:34:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.875 14:34:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.875 14:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:01.875 ************************************ 00:06:01.875 START TEST dpdk_mem_utility 00:06:01.875 ************************************ 00:06:01.875 14:34:38 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.875 * Looking for test storage... 
00:06:01.875 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:06:01.875 14:34:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:01.875 14:34:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1410095 00:06:01.875 14:34:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1410095 00:06:01.875 14:34:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.875 14:34:38 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1410095 ']' 00:06:01.875 14:34:38 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.875 14:34:38 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.875 14:34:38 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.875 14:34:38 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.875 14:34:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.875 [2024-07-12 14:34:38.613364] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:06:01.875 [2024-07-12 14:34:38.613452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410095 ] 00:06:01.875 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.135 [2024-07-12 14:34:38.701611] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.135 [2024-07-12 14:34:38.790853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.703 14:34:39 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.703 14:34:39 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:02.703 14:34:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:02.703 14:34:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:02.703 14:34:39 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.703 14:34:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:02.703 { 00:06:02.703 "filename": "/tmp/spdk_mem_dump.txt" 00:06:02.703 } 00:06:02.703 14:34:39 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.703 14:34:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:02.965 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:02.965 1 heaps totaling size 814.000000 MiB 00:06:02.965 size: 814.000000 MiB heap id: 0 00:06:02.965 end heaps---------- 00:06:02.965 8 mempools totaling size 598.116089 MiB 00:06:02.965 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:02.965 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:02.965 size: 84.521057 MiB name: bdev_io_1410095 00:06:02.965 size: 51.011292 MiB name: evtpool_1410095 
00:06:02.965 size: 50.003479 MiB name: msgpool_1410095 00:06:02.965 size: 21.763794 MiB name: PDU_Pool 00:06:02.965 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:02.965 size: 0.026123 MiB name: Session_Pool 00:06:02.965 end mempools------- 00:06:02.965 6 memzones totaling size 4.142822 MiB 00:06:02.965 size: 1.000366 MiB name: RG_ring_0_1410095 00:06:02.965 size: 1.000366 MiB name: RG_ring_1_1410095 00:06:02.965 size: 1.000366 MiB name: RG_ring_4_1410095 00:06:02.965 size: 1.000366 MiB name: RG_ring_5_1410095 00:06:02.965 size: 0.125366 MiB name: RG_ring_2_1410095 00:06:02.965 size: 0.015991 MiB name: RG_ring_3_1410095 00:06:02.965 end memzones------- 00:06:02.965 14:34:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:02.965 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:02.965 list of free elements. size: 12.519348 MiB 00:06:02.965 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:02.965 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:02.965 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:02.965 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:02.965 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:02.965 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:02.965 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:02.965 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:02.965 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:02.965 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:02.965 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:02.965 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:02.965 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:02.965 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:02.965 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:02.965 list of standard malloc elements. 
size: 199.218079 MiB 00:06:02.965 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:02.965 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:02.965 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:02.965 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:02.965 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:02.965 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:02.965 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:02.965 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:02.965 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:02.965 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:02.965 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:02.965 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:02.965 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:02.965 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:02.965 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:02.965 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:02.965 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:02.965 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:02.965 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:02.965 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:02.965 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:02.965 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:02.965 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:02.965 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:02.965 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:02.965 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:02.965 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:02.965 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:02.965 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:02.965 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:02.965 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:02.965 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:02.965 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:02.965 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:02.965 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:02.965 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:02.965 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:02.965 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:02.965 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:02.965 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:02.965 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:02.965 list of memzone associated elements. 
size: 602.262573 MiB 00:06:02.965 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:02.965 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:02.965 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:02.965 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:02.965 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:02.965 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1410095_0 00:06:02.965 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:02.965 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1410095_0 00:06:02.965 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:02.965 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1410095_0 00:06:02.965 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:02.965 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:02.965 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:02.965 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:02.965 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:02.965 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1410095 00:06:02.966 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:02.966 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1410095 00:06:02.966 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:02.966 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1410095 00:06:02.966 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:02.966 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:02.966 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:02.966 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:02.966 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:02.966 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:02.966 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:02.966 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:02.966 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:02.966 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1410095 00:06:02.966 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:02.966 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1410095 00:06:02.966 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:02.966 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1410095 00:06:02.966 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:02.966 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1410095 00:06:02.966 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:02.966 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1410095 00:06:02.966 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:02.966 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:02.966 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:02.966 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:02.966 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:02.966 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:02.966 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:02.966 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1410095 00:06:02.966 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:02.966 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:02.966 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:02.966 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:02.966 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:02.966 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1410095 00:06:02.966 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:02.966 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:02.966 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:02.966 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1410095 00:06:02.966 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:02.966 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1410095 00:06:02.966 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:02.966 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:02.966 14:34:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:02.966 14:34:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1410095 00:06:02.966 14:34:39 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1410095 ']' 00:06:02.966 14:34:39 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1410095 00:06:02.966 14:34:39 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:02.966 14:34:39 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.966 14:34:39 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1410095 00:06:02.966 14:34:39 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:02.966 14:34:39 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:02.966 14:34:39 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1410095' 00:06:02.966 killing process with pid 1410095 00:06:02.966 14:34:39 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1410095 00:06:02.966 14:34:39 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1410095 00:06:03.304 00:06:03.304 real 0m1.460s 00:06:03.304 user 0m1.454s 00:06:03.304 sys 0m0.491s 00:06:03.304 14:34:39 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.304 14:34:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.304 ************************************ 00:06:03.304 END TEST dpdk_mem_utility 00:06:03.304 ************************************ 00:06:03.304 14:34:39 -- common/autotest_common.sh@1142 -- # return 0 00:06:03.304 14:34:39 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:03.304 14:34:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.304 14:34:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.304 14:34:39 -- common/autotest_common.sh@10 -- # set +x 00:06:03.304 ************************************ 00:06:03.304 START TEST event 00:06:03.304 ************************************ 00:06:03.304 14:34:40 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:03.563 * Looking for test storage... 
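Note: the dpdk_mem_utility pass is two steps: the env_dpdk_get_mem_stats RPC makes the running target dump its DPDK allocator state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders the heap/mempool/memzone summary shown above (-m 0 adds the per-element listing for heap id 0). As a sketch against a live spdk_tgt:

    # Ask the target to write its memory dump; the RPC returns the dump filename.
    scripts/rpc.py env_dpdk_get_mem_stats
    # -> { "filename": "/tmp/spdk_mem_dump.txt" }

    # Summarize heaps, mempools and memzones from that dump...
    scripts/dpdk_mem_info.py
    # ...and list the individual elements of heap id 0.
    scripts/dpdk_mem_info.py -m 0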
00:06:03.563 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:03.563 14:34:40 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:03.563 14:34:40 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:03.563 14:34:40 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.563 14:34:40 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:03.563 14:34:40 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.563 14:34:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.563 ************************************ 00:06:03.563 START TEST event_perf 00:06:03.563 ************************************ 00:06:03.563 14:34:40 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.563 Running I/O for 1 seconds...[2024-07-12 14:34:40.192704] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:06:03.563 [2024-07-12 14:34:40.192787] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410338 ] 00:06:03.563 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.563 [2024-07-12 14:34:40.284442] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.822 [2024-07-12 14:34:40.371856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.822 [2024-07-12 14:34:40.371958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.822 [2024-07-12 14:34:40.372058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.822 [2024-07-12 14:34:40.372059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.760 Running I/O for 1 seconds... 00:06:04.760 lcore 0: 186196 00:06:04.760 lcore 1: 186194 00:06:04.760 lcore 2: 186195 00:06:04.760 lcore 3: 186195 00:06:04.760 done. 00:06:04.760 00:06:04.760 real 0m1.274s 00:06:04.760 user 0m4.157s 00:06:04.760 sys 0m0.112s 00:06:04.760 14:34:41 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.760 14:34:41 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.760 ************************************ 00:06:04.760 END TEST event_perf 00:06:04.760 ************************************ 00:06:04.760 14:34:41 event -- common/autotest_common.sh@1142 -- # return 0 00:06:04.760 14:34:41 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:04.760 14:34:41 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:04.760 14:34:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.760 14:34:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.760 ************************************ 00:06:04.760 START TEST event_reactor 00:06:04.760 ************************************ 00:06:04.760 14:34:41 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:05.020 [2024-07-12 14:34:41.551709] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:06:05.020 [2024-07-12 14:34:41.551818] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410533 ] 00:06:05.020 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.020 [2024-07-12 14:34:41.645023] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.020 [2024-07-12 14:34:41.729796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.395 test_start 00:06:06.395 oneshot 00:06:06.395 tick 100 00:06:06.395 tick 100 00:06:06.395 tick 250 00:06:06.395 tick 100 00:06:06.395 tick 100 00:06:06.395 tick 100 00:06:06.395 tick 250 00:06:06.395 tick 500 00:06:06.395 tick 100 00:06:06.395 tick 100 00:06:06.395 tick 250 00:06:06.395 tick 100 00:06:06.395 tick 100 00:06:06.395 test_end 00:06:06.395 00:06:06.395 real 0m1.270s 00:06:06.395 user 0m1.158s 00:06:06.395 sys 0m0.107s 00:06:06.395 14:34:42 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.395 14:34:42 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:06.395 ************************************ 00:06:06.395 END TEST event_reactor 00:06:06.395 ************************************ 00:06:06.395 14:34:42 event -- common/autotest_common.sh@1142 -- # return 0 00:06:06.395 14:34:42 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:06.395 14:34:42 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:06.395 14:34:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.395 14:34:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.395 ************************************ 00:06:06.395 START TEST event_reactor_perf 00:06:06.395 ************************************ 00:06:06.395 14:34:42 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:06.395 [2024-07-12 14:34:42.906445] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:06:06.395 [2024-07-12 14:34:42.906539] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410725 ] 00:06:06.395 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.395 [2024-07-12 14:34:42.997350] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.395 [2024-07-12 14:34:43.087665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.772 test_start 00:06:07.772 test_end 00:06:07.772 Performance: 955756 events per second 00:06:07.772 00:06:07.772 real 0m1.275s 00:06:07.772 user 0m1.160s 00:06:07.772 sys 0m0.110s 00:06:07.772 14:34:44 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.772 14:34:44 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.772 ************************************ 00:06:07.772 END TEST event_reactor_perf 00:06:07.772 ************************************ 00:06:07.772 14:34:44 event -- common/autotest_common.sh@1142 -- # return 0 00:06:07.772 14:34:44 event -- event/event.sh@49 -- # uname -s 00:06:07.772 14:34:44 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:07.772 14:34:44 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:07.772 14:34:44 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.772 14:34:44 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.772 14:34:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.772 ************************************ 00:06:07.772 START TEST event_scheduler 00:06:07.772 ************************************ 00:06:07.772 14:34:44 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:07.772 * Looking for test storage... 00:06:07.772 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:06:07.772 14:34:44 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:07.772 14:34:44 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1410959 00:06:07.772 14:34:44 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.772 14:34:44 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:07.772 14:34:44 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1410959 00:06:07.772 14:34:44 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1410959 ']' 00:06:07.772 14:34:44 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.772 14:34:44 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.772 14:34:44 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:07.772 14:34:44 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.772 14:34:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.772 [2024-07-12 14:34:44.385325] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:06:07.772 [2024-07-12 14:34:44.385407] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410959 ] 00:06:07.772 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.772 [2024-07-12 14:34:44.480291] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:08.031 [2024-07-12 14:34:44.571169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.031 [2024-07-12 14:34:44.571495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.031 [2024-07-12 14:34:44.571597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.031 [2024-07-12 14:34:44.571598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.598 14:34:45 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.598 14:34:45 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:08.598 14:34:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:08.598 14:34:45 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.598 14:34:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.598 [2024-07-12 14:34:45.238020] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:08.598 [2024-07-12 14:34:45.238043] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:08.598 [2024-07-12 14:34:45.238054] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:08.598 [2024-07-12 14:34:45.238062] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:08.598 [2024-07-12 14:34:45.238070] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:08.598 14:34:45 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.598 14:34:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:08.598 14:34:45 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.598 14:34:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.598 [2024-07-12 14:34:45.313002] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
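Note: the scheduler test starts its app with --wait-for-rpc, switches to the dynamic scheduler, and only then lets framework initialization run, which is why the dpdk_governor warning and the load/core/busy limits (20/80/95) appear right before "Scheduler test application started." The same two RPCs as a standalone sketch:

    # While the app is still waiting for RPCs, pick the dynamic scheduler...
    scripts/rpc.py framework_set_scheduler dynamic
    # ...then allow framework initialization to proceed.
    scripts/rpc.py framework_start_init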
00:06:08.598 14:34:45 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.598 14:34:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:08.598 14:34:45 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.598 14:34:45 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.598 14:34:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.598 ************************************ 00:06:08.598 START TEST scheduler_create_thread 00:06:08.598 ************************************ 00:06:08.598 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:08.598 14:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:08.598 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.598 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.598 2 00:06:08.598 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.598 14:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:08.598 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.598 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.598 3 00:06:08.598 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.598 14:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:08.598 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.598 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.857 4 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.858 5 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.858 6 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.858 7 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.858 8 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.858 9 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.858 10 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.858 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.425 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.425 14:34:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:09.425 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.425 14:34:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.801 14:34:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.801 14:34:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:10.801 14:34:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:10.801 14:34:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.801 14:34:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.735 14:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.735 00:06:11.735 real 0m3.104s 00:06:11.735 user 0m0.025s 00:06:11.735 sys 0m0.005s 00:06:11.735 14:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.735 14:34:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.735 ************************************ 00:06:11.735 END TEST scheduler_create_thread 00:06:11.735 ************************************ 00:06:11.735 14:34:48 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:11.735 14:34:48 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:11.735 14:34:48 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1410959 00:06:11.735 14:34:48 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1410959 ']' 00:06:11.735 14:34:48 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1410959 00:06:11.735 14:34:48 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:11.735 14:34:48 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:11.735 14:34:48 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1410959 00:06:11.994 14:34:48 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:11.994 14:34:48 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:11.994 14:34:48 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1410959' 00:06:11.994 killing process with pid 1410959 00:06:11.994 14:34:48 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1410959 00:06:11.994 14:34:48 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1410959 00:06:12.253 [2024-07-12 14:34:48.836165] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
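(Editor's note, not part of the captured output.) The scheduler_create_thread test traced above drives the running scheduler test app entirely over RPC. Below is a minimal sketch of that call sequence, assuming rpc_cmd resolves to scripts/rpc.py against the test app's RPC socket as it does elsewhere in this log; thread ids 11 and 12 are the values observed in this particular run.

    rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to one core (repeated for masks 0x2/0x4/0x8)
    rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle pinned thread (likewise repeated per core)
    rpc.py --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30        # unpinned thread at 30% activity
    thread_id=$(rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50              # raise it to 50% (thread 11 in this run)
    thread_id=$(rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc.py --plugin scheduler_plugin scheduler_thread_delete "$thread_id"                     # thread 12 in this run, created only to be deleted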
00:06:12.512 00:06:12.512 real 0m4.811s 00:06:12.512 user 0m9.278s 00:06:12.512 sys 0m0.466s 00:06:12.512 14:34:49 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.512 14:34:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.512 ************************************ 00:06:12.512 END TEST event_scheduler 00:06:12.512 ************************************ 00:06:12.512 14:34:49 event -- common/autotest_common.sh@1142 -- # return 0 00:06:12.512 14:34:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:12.512 14:34:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:12.512 14:34:49 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.512 14:34:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.512 14:34:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.513 ************************************ 00:06:12.513 START TEST app_repeat 00:06:12.513 ************************************ 00:06:12.513 14:34:49 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:12.513 14:34:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.513 14:34:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.513 14:34:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:12.513 14:34:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.513 14:34:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:12.513 14:34:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:12.513 14:34:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:12.513 14:34:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1411702 00:06:12.513 14:34:49 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:12.513 14:34:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.513 14:34:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1411702' 00:06:12.513 Process app_repeat pid: 1411702 00:06:12.513 14:34:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:12.513 14:34:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:12.513 spdk_app_start Round 0 00:06:12.513 14:34:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1411702 /var/tmp/spdk-nbd.sock 00:06:12.513 14:34:49 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1411702 ']' 00:06:12.513 14:34:49 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.513 14:34:49 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.513 14:34:49 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:12.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:12.513 14:34:49 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.513 14:34:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.513 [2024-07-12 14:34:49.172883] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:06:12.513 [2024-07-12 14:34:49.172975] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1411702 ] 00:06:12.513 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.513 [2024-07-12 14:34:49.262868] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.772 [2024-07-12 14:34:49.356174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.772 [2024-07-12 14:34:49.356175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.340 14:34:50 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.340 14:34:50 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:13.340 14:34:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.599 Malloc0 00:06:13.599 14:34:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.859 Malloc1 00:06:13.859 14:34:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.859 14:34:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.859 14:34:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.859 14:34:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.859 14:34:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.859 14:34:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.859 14:34:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.859 14:34:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.859 14:34:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.859 14:34:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.859 14:34:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.859 14:34:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.859 14:34:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:13.859 14:34:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.859 14:34:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.859 14:34:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:14.119 /dev/nbd0 00:06:14.119 14:34:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.119 14:34:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.119 14:34:50 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:14.119 14:34:50 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:14.119 14:34:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:14.119 14:34:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:14.119 14:34:50 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:14.119 14:34:50 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:14.119 14:34:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:14.119 14:34:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:14.119 14:34:50 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.119 1+0 records in 00:06:14.119 1+0 records out 00:06:14.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024664 s, 16.6 MB/s 00:06:14.119 14:34:50 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:14.119 14:34:50 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:14.119 14:34:50 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:14.119 14:34:50 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:14.119 14:34:50 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:14.119 14:34:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.119 14:34:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.119 14:34:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:14.119 /dev/nbd1 00:06:14.385 14:34:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:14.385 14:34:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:14.385 14:34:50 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:14.385 14:34:50 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:14.385 14:34:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:14.385 14:34:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:14.385 14:34:50 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:14.385 14:34:50 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:14.385 14:34:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:14.385 14:34:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:14.385 14:34:50 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.385 1+0 records in 00:06:14.385 1+0 records out 00:06:14.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274241 s, 14.9 MB/s 00:06:14.385 14:34:50 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:14.385 14:34:50 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:14.385 14:34:50 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:14.385 14:34:50 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:14.385 14:34:50 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:14.385 14:34:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.385 
14:34:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.385 14:34:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.385 14:34:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.385 14:34:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.385 14:34:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.385 { 00:06:14.385 "nbd_device": "/dev/nbd0", 00:06:14.385 "bdev_name": "Malloc0" 00:06:14.385 }, 00:06:14.385 { 00:06:14.385 "nbd_device": "/dev/nbd1", 00:06:14.385 "bdev_name": "Malloc1" 00:06:14.385 } 00:06:14.385 ]' 00:06:14.385 14:34:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.385 { 00:06:14.385 "nbd_device": "/dev/nbd0", 00:06:14.385 "bdev_name": "Malloc0" 00:06:14.385 }, 00:06:14.385 { 00:06:14.385 "nbd_device": "/dev/nbd1", 00:06:14.385 "bdev_name": "Malloc1" 00:06:14.385 } 00:06:14.385 ]' 00:06:14.385 14:34:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.644 14:34:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.644 /dev/nbd1' 00:06:14.644 14:34:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.644 /dev/nbd1' 00:06:14.644 14:34:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.644 14:34:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.644 14:34:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.644 14:34:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.644 14:34:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.644 14:34:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.644 14:34:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.644 14:34:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.644 14:34:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.644 14:34:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.644 14:34:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.644 14:34:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.644 256+0 records in 00:06:14.644 256+0 records out 00:06:14.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107224 s, 97.8 MB/s 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.645 256+0 records in 00:06:14.645 256+0 records out 00:06:14.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021136 s, 49.6 MB/s 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.645 256+0 records in 00:06:14.645 256+0 records out 
00:06:14.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225986 s, 46.4 MB/s 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.645 14:34:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.903 14:34:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.903 14:34:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.903 14:34:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.903 14:34:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.903 14:34:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.903 14:34:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.903 14:34:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.903 14:34:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.903 14:34:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.903 14:34:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.903 14:34:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.903 14:34:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.903 14:34:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.903 14:34:51 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.903 14:34:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.903 14:34:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.161 14:34:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.161 14:34:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.161 14:34:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.161 14:34:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.161 14:34:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.161 14:34:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.161 14:34:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.161 14:34:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.161 14:34:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.161 14:34:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.161 14:34:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.161 14:34:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:15.161 14:34:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.161 14:34:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.161 14:34:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.161 14:34:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.161 14:34:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.161 14:34:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.419 14:34:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:15.678 [2024-07-12 14:34:52.316447] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.678 [2024-07-12 14:34:52.398366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.678 [2024-07-12 14:34:52.398366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.678 [2024-07-12 14:34:52.444008] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.678 [2024-07-12 14:34:52.444054] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:18.955 14:34:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:18.955 14:34:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:18.955 spdk_app_start Round 1 00:06:18.955 14:34:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1411702 /var/tmp/spdk-nbd.sock 00:06:18.955 14:34:55 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1411702 ']' 00:06:18.955 14:34:55 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.955 14:34:55 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.955 14:34:55 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
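(Editor's note, not part of the captured output.) Each app_repeat round traced above exercises the same Malloc-bdev/NBD data path. A minimal sketch of that per-round loop, reconstructed from the traced commands; $rpc stands for the scripts/rpc.py -s /var/tmp/spdk-nbd.sock invocation shown in the trace, and the workspace paths of the test files are shortened.

    $rpc bdev_malloc_create 64 4096                       # creates Malloc0 (run again for Malloc1)
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256   # 1 MiB reference pattern
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct   # write the pattern to each exported disk
    done
    cmp -b -n 1M nbdrandtest /dev/nbd0                    # read back and verify both disks
    cmp -b -n 1M nbdrandtest /dev/nbd1
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1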
00:06:18.955 14:34:55 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.955 14:34:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.955 14:34:55 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.955 14:34:55 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:18.955 14:34:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.955 Malloc0 00:06:18.955 14:34:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.955 Malloc1 00:06:18.955 14:34:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.955 14:34:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.955 14:34:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.955 14:34:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.955 14:34:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.955 14:34:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.955 14:34:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.955 14:34:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.955 14:34:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.955 14:34:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.955 14:34:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.955 14:34:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.955 14:34:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:18.955 14:34:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.955 14:34:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.955 14:34:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.214 /dev/nbd0 00:06:19.214 14:34:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.214 14:34:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.214 14:34:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:19.214 14:34:55 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:19.214 14:34:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:19.214 14:34:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:19.214 14:34:55 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:19.214 14:34:55 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:19.214 14:34:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:19.214 14:34:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:19.214 14:34:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.214 1+0 records in 00:06:19.214 1+0 records out 00:06:19.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025778 s, 15.9 MB/s 00:06:19.214 14:34:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:19.214 14:34:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:19.214 14:34:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:19.214 14:34:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:19.214 14:34:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:19.214 14:34:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.214 14:34:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.214 14:34:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.473 /dev/nbd1 00:06:19.473 14:34:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.473 14:34:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.473 14:34:56 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:19.473 14:34:56 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:19.473 14:34:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:19.473 14:34:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:19.473 14:34:56 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:19.473 14:34:56 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:19.473 14:34:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:19.473 14:34:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:19.473 14:34:56 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.473 1+0 records in 00:06:19.473 1+0 records out 00:06:19.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264322 s, 15.5 MB/s 00:06:19.473 14:34:56 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:19.473 14:34:56 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:19.473 14:34:56 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:19.473 14:34:56 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:19.473 14:34:56 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:19.473 14:34:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.473 14:34:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.473 14:34:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.473 14:34:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.473 14:34:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.732 { 00:06:19.732 "nbd_device": "/dev/nbd0", 00:06:19.732 "bdev_name": "Malloc0" 00:06:19.732 }, 00:06:19.732 { 00:06:19.732 "nbd_device": "/dev/nbd1", 00:06:19.732 "bdev_name": "Malloc1" 00:06:19.732 } 00:06:19.732 ]' 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.732 { 00:06:19.732 "nbd_device": "/dev/nbd0", 00:06:19.732 "bdev_name": "Malloc0" 00:06:19.732 }, 00:06:19.732 { 00:06:19.732 "nbd_device": "/dev/nbd1", 00:06:19.732 "bdev_name": "Malloc1" 00:06:19.732 } 00:06:19.732 ]' 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.732 /dev/nbd1' 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.732 /dev/nbd1' 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.732 256+0 records in 00:06:19.732 256+0 records out 00:06:19.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114598 s, 91.5 MB/s 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.732 256+0 records in 00:06:19.732 256+0 records out 00:06:19.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208352 s, 50.3 MB/s 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.732 256+0 records in 00:06:19.732 256+0 records out 00:06:19.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224109 s, 46.8 MB/s 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.732 14:34:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.991 14:34:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.991 14:34:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.991 14:34:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.991 14:34:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.992 14:34:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.992 14:34:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.992 14:34:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.992 14:34:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.992 14:34:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.992 14:34:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.250 14:34:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.250 14:34:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.250 14:34:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.250 14:34:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.250 14:34:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.250 14:34:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.250 14:34:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.250 14:34:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.250 14:34:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.250 14:34:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.250 14:34:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.509 14:34:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.509 14:34:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.509 14:34:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.509 14:34:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.509 14:34:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.509 14:34:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.509 14:34:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:20.509 14:34:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.509 14:34:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.509 14:34:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.509 14:34:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.509 14:34:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.509 14:34:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:20.767 14:34:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:20.767 [2024-07-12 14:34:57.497172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.026 [2024-07-12 14:34:57.578462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.026 [2024-07-12 14:34:57.578462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.026 [2024-07-12 14:34:57.618499] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.026 [2024-07-12 14:34:57.618547] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:23.590 14:35:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:23.590 14:35:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:23.590 spdk_app_start Round 2 00:06:23.590 14:35:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1411702 /var/tmp/spdk-nbd.sock 00:06:23.590 14:35:00 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1411702 ']' 00:06:23.590 14:35:00 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:23.590 14:35:00 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.590 14:35:00 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:23.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
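(Editor's note, not part of the captured output.) The count checks in the trace above confirm that both NBD devices are registered while a round runs and are gone after teardown. A sketch of that check, using the same $rpc shorthand as the previous sketch; this is a reconstruction of the nbd_get_count helper seen in the trace, not its actual source.

    disks_json=$($rpc nbd_get_disks)                      # JSON array of {nbd_device, bdev_name} pairs
    count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd) || true
    [ "$count" -eq 2 ]                                    # both devices present while Malloc0/Malloc1 are exported
    # ... after nbd_stop_disk on both devices, the same pipeline yields 0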
00:06:23.590 14:35:00 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.590 14:35:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:23.849 14:35:00 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.849 14:35:00 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:23.849 14:35:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.109 Malloc0 00:06:24.109 14:35:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.109 Malloc1 00:06:24.109 14:35:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.109 14:35:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.109 14:35:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.109 14:35:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.109 14:35:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.109 14:35:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.109 14:35:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.109 14:35:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.109 14:35:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.109 14:35:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.109 14:35:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.110 14:35:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.110 14:35:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:24.110 14:35:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.110 14:35:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.110 14:35:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:24.369 /dev/nbd0 00:06:24.369 14:35:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.369 14:35:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:24.369 14:35:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:24.369 14:35:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:24.369 14:35:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:24.369 14:35:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:24.369 14:35:01 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:24.369 14:35:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:24.369 14:35:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:24.369 14:35:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:24.369 14:35:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.369 1+0 records in 00:06:24.369 1+0 records out 00:06:24.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000138562 s, 29.6 MB/s 00:06:24.369 14:35:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:24.369 14:35:01 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:24.369 14:35:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:24.369 14:35:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:24.369 14:35:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:24.369 14:35:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.369 14:35:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.369 14:35:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:24.629 /dev/nbd1 00:06:24.629 14:35:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:24.629 14:35:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:24.629 14:35:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:24.629 14:35:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:24.629 14:35:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:24.629 14:35:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:24.629 14:35:01 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:24.629 14:35:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:24.629 14:35:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:24.629 14:35:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:24.629 14:35:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.629 1+0 records in 00:06:24.629 1+0 records out 00:06:24.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279413 s, 14.7 MB/s 00:06:24.629 14:35:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:24.629 14:35:01 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:24.629 14:35:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:24.629 14:35:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:24.629 14:35:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:24.629 14:35:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.629 14:35:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.629 14:35:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.629 14:35:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.629 14:35:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:24.888 14:35:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:24.888 { 00:06:24.888 "nbd_device": "/dev/nbd0", 00:06:24.888 "bdev_name": "Malloc0" 00:06:24.888 }, 00:06:24.888 { 00:06:24.888 "nbd_device": "/dev/nbd1", 00:06:24.888 "bdev_name": "Malloc1" 00:06:24.888 } 00:06:24.888 ]' 00:06:24.888 14:35:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.888 { 00:06:24.888 "nbd_device": "/dev/nbd0", 00:06:24.888 "bdev_name": "Malloc0" 00:06:24.888 }, 00:06:24.888 { 00:06:24.888 "nbd_device": "/dev/nbd1", 00:06:24.888 "bdev_name": "Malloc1" 00:06:24.888 } 00:06:24.888 ]' 00:06:24.888 14:35:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.888 14:35:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:24.888 /dev/nbd1' 00:06:24.888 14:35:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:24.888 /dev/nbd1' 00:06:24.888 14:35:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.888 14:35:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:24.888 14:35:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:24.888 14:35:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:24.888 14:35:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:24.888 14:35:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:24.888 14:35:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.888 14:35:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.888 14:35:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:24.888 14:35:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.888 14:35:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:24.889 256+0 records in 00:06:24.889 256+0 records out 00:06:24.889 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114009 s, 92.0 MB/s 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:24.889 256+0 records in 00:06:24.889 256+0 records out 00:06:24.889 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209592 s, 50.0 MB/s 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.889 256+0 records in 00:06:24.889 256+0 records out 00:06:24.889 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227908 s, 46.0 MB/s 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.889 14:35:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.148 14:35:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.148 14:35:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.148 14:35:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.148 14:35:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.148 14:35:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.148 14:35:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.148 14:35:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.148 14:35:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.148 14:35:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.148 14:35:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.407 14:35:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.407 14:35:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:25.407 14:35:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.407 14:35:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.407 14:35:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.407 14:35:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:25.407 14:35:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.407 14:35:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.407 14:35:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.407 14:35:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.407 14:35:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.666 14:35:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.666 14:35:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.666 14:35:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.666 14:35:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.666 14:35:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.666 14:35:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.666 14:35:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:25.666 14:35:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.666 14:35:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.666 14:35:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:25.666 14:35:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:25.666 14:35:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:25.666 14:35:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:25.926 14:35:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:25.926 [2024-07-12 14:35:02.682589] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.185 [2024-07-12 14:35:02.766170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.185 [2024-07-12 14:35:02.766171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.185 [2024-07-12 14:35:02.811888] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:26.185 [2024-07-12 14:35:02.811933] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.716 14:35:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1411702 /var/tmp/spdk-nbd.sock 00:06:28.716 14:35:05 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1411702 ']' 00:06:28.716 14:35:05 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.716 14:35:05 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.716 14:35:05 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:28.716 14:35:05 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.716 14:35:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.975 14:35:05 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.975 14:35:05 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:28.975 14:35:05 event.app_repeat -- event/event.sh@39 -- # killprocess 1411702 00:06:28.975 14:35:05 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1411702 ']' 00:06:28.975 14:35:05 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1411702 00:06:28.975 14:35:05 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:28.975 14:35:05 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.975 14:35:05 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1411702 00:06:28.975 14:35:05 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:28.975 14:35:05 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:28.975 14:35:05 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1411702' 00:06:28.975 killing process with pid 1411702 00:06:28.975 14:35:05 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1411702 00:06:28.975 14:35:05 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1411702 00:06:29.234 spdk_app_start is called in Round 0. 00:06:29.234 Shutdown signal received, stop current app iteration 00:06:29.234 Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 reinitialization... 00:06:29.234 spdk_app_start is called in Round 1. 00:06:29.234 Shutdown signal received, stop current app iteration 00:06:29.235 Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 reinitialization... 00:06:29.235 spdk_app_start is called in Round 2. 00:06:29.235 Shutdown signal received, stop current app iteration 00:06:29.235 Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 reinitialization... 00:06:29.235 spdk_app_start is called in Round 3. 
00:06:29.235 Shutdown signal received, stop current app iteration 00:06:29.235 14:35:05 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:29.235 14:35:05 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:29.235 00:06:29.235 real 0m16.771s 00:06:29.235 user 0m35.579s 00:06:29.235 sys 0m3.337s 00:06:29.235 14:35:05 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.235 14:35:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.235 ************************************ 00:06:29.235 END TEST app_repeat 00:06:29.235 ************************************ 00:06:29.235 14:35:05 event -- common/autotest_common.sh@1142 -- # return 0 00:06:29.235 14:35:05 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:29.235 14:35:05 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:29.235 14:35:05 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.235 14:35:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.235 14:35:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.235 ************************************ 00:06:29.235 START TEST cpu_locks 00:06:29.235 ************************************ 00:06:29.235 14:35:05 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:29.494 * Looking for test storage... 00:06:29.494 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:29.494 14:35:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:29.494 14:35:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:29.494 14:35:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:29.494 14:35:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:29.494 14:35:06 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.494 14:35:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.494 14:35:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.494 ************************************ 00:06:29.494 START TEST default_locks 00:06:29.494 ************************************ 00:06:29.494 14:35:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:29.494 14:35:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1414059 00:06:29.494 14:35:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1414059 00:06:29.494 14:35:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.494 14:35:06 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1414059 ']' 00:06:29.494 14:35:06 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.494 14:35:06 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.494 14:35:06 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:29.494 14:35:06 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.494 14:35:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.494 [2024-07-12 14:35:06.173026] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:06:29.494 [2024-07-12 14:35:06.173114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1414059 ] 00:06:29.494 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.494 [2024-07-12 14:35:06.246793] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.759 [2024-07-12 14:35:06.340662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.332 14:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.332 14:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:30.332 14:35:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1414059 00:06:30.332 14:35:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1414059 00:06:30.332 14:35:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.900 lslocks: write error 00:06:30.900 14:35:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1414059 00:06:30.900 14:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1414059 ']' 00:06:30.900 14:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1414059 00:06:30.900 14:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:30.900 14:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.900 14:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1414059 00:06:30.900 14:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:30.900 14:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:30.900 14:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1414059' 00:06:30.900 killing process with pid 1414059 00:06:30.900 14:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1414059 00:06:30.900 14:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1414059 00:06:31.469 14:35:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1414059 00:06:31.469 14:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:31.469 14:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1414059 00:06:31.469 14:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:31.469 14:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.469 14:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:31.469 14:35:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.469 14:35:08 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 1414059 00:06:31.469 14:35:08 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1414059 ']' 00:06:31.469 14:35:08 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.469 14:35:08 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.469 14:35:08 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.469 14:35:08 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.469 14:35:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.469 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1414059) - No such process 00:06:31.469 ERROR: process (pid: 1414059) is no longer running 00:06:31.469 14:35:08 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.469 14:35:08 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:31.469 14:35:08 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:31.469 14:35:08 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:31.469 14:35:08 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:31.469 14:35:08 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:31.469 14:35:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:31.469 14:35:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:31.469 14:35:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:31.469 14:35:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:31.469 00:06:31.469 real 0m1.861s 00:06:31.469 user 0m1.928s 00:06:31.469 sys 0m0.658s 00:06:31.469 14:35:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.469 14:35:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.469 ************************************ 00:06:31.469 END TEST default_locks 00:06:31.469 ************************************ 00:06:31.469 14:35:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:31.469 14:35:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:31.469 14:35:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.469 14:35:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.469 14:35:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.469 ************************************ 00:06:31.469 START TEST default_locks_via_rpc 00:06:31.469 ************************************ 00:06:31.469 14:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:31.469 14:35:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1414422 00:06:31.469 14:35:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1414422 00:06:31.469 14:35:08 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.469 14:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1414422 ']' 00:06:31.469 14:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.469 14:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.469 14:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.469 14:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.469 14:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.469 [2024-07-12 14:35:08.117999] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:06:31.469 [2024-07-12 14:35:08.118069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1414422 ] 00:06:31.469 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.469 [2024-07-12 14:35:08.207867] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.728 [2024-07-12 14:35:08.299292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.295 14:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.295 14:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:32.295 14:35:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:32.295 14:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.295 14:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.295 14:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.295 14:35:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:32.295 14:35:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:32.295 14:35:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:32.295 14:35:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:32.295 14:35:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:32.295 14:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.295 14:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.295 14:35:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.295 14:35:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1414422 00:06:32.295 14:35:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1414422 00:06:32.295 14:35:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
00:06:32.554 14:35:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1414422 00:06:32.554 14:35:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1414422 ']' 00:06:32.554 14:35:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1414422 00:06:32.554 14:35:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:32.554 14:35:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:32.554 14:35:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1414422 00:06:32.554 14:35:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:32.554 14:35:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:32.554 14:35:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1414422' 00:06:32.554 killing process with pid 1414422 00:06:32.554 14:35:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1414422 00:06:32.554 14:35:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1414422 00:06:33.122 00:06:33.122 real 0m1.566s 00:06:33.122 user 0m1.617s 00:06:33.122 sys 0m0.561s 00:06:33.122 14:35:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.122 14:35:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.122 ************************************ 00:06:33.122 END TEST default_locks_via_rpc 00:06:33.122 ************************************ 00:06:33.122 14:35:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:33.122 14:35:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:33.122 14:35:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.122 14:35:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.122 14:35:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.122 ************************************ 00:06:33.122 START TEST non_locking_app_on_locked_coremask 00:06:33.122 ************************************ 00:06:33.122 14:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:33.122 14:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1414634 00:06:33.122 14:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1414634 /var/tmp/spdk.sock 00:06:33.122 14:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.122 14:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1414634 ']' 00:06:33.122 14:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.122 14:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.122 14:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.122 14:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.122 14:35:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.122 [2024-07-12 14:35:09.764376] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:06:33.122 [2024-07-12 14:35:09.764456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1414634 ] 00:06:33.122 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.122 [2024-07-12 14:35:09.851918] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.382 [2024-07-12 14:35:09.942659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.950 14:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.950 14:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:33.950 14:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:33.950 14:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1414802 00:06:33.950 14:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1414802 /var/tmp/spdk2.sock 00:06:33.950 14:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1414802 ']' 00:06:33.950 14:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.950 14:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.950 14:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.950 14:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.950 14:35:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.950 [2024-07-12 14:35:10.601777] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:06:33.950 [2024-07-12 14:35:10.601829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1414802 ] 00:06:33.950 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.950 [2024-07-12 14:35:10.698864] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:33.950 [2024-07-12 14:35:10.698888] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.209 [2024-07-12 14:35:10.866131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.775 14:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.775 14:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:34.775 14:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1414634 00:06:34.775 14:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1414634 00:06:34.775 14:35:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.153 lslocks: write error 00:06:36.153 14:35:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1414634 00:06:36.153 14:35:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1414634 ']' 00:06:36.153 14:35:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1414634 00:06:36.153 14:35:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:36.153 14:35:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.153 14:35:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1414634 00:06:36.153 14:35:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.153 14:35:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.153 14:35:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1414634' 00:06:36.153 killing process with pid 1414634 00:06:36.153 14:35:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1414634 00:06:36.153 14:35:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1414634 00:06:36.728 14:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1414802 00:06:36.728 14:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1414802 ']' 00:06:36.728 14:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1414802 00:06:36.728 14:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:36.728 14:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.728 14:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1414802 00:06:36.728 14:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.728 14:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.728 14:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1414802' 00:06:36.728 
killing process with pid 1414802 00:06:36.728 14:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1414802 00:06:36.728 14:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1414802 00:06:36.987 00:06:36.987 real 0m3.893s 00:06:36.987 user 0m4.053s 00:06:36.987 sys 0m1.258s 00:06:36.987 14:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.987 14:35:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.987 ************************************ 00:06:36.987 END TEST non_locking_app_on_locked_coremask 00:06:36.987 ************************************ 00:06:36.987 14:35:13 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:36.987 14:35:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:36.987 14:35:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.987 14:35:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.987 14:35:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.987 ************************************ 00:06:36.987 START TEST locking_app_on_unlocked_coremask 00:06:36.987 ************************************ 00:06:36.987 14:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:36.987 14:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:36.987 14:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1415196 00:06:36.987 14:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1415196 /var/tmp/spdk.sock 00:06:36.987 14:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1415196 ']' 00:06:36.987 14:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.987 14:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.987 14:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.987 14:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.987 14:35:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.987 [2024-07-12 14:35:13.735951] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:06:36.987 [2024-07-12 14:35:13.736025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1415196 ] 00:06:36.987 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.247 [2024-07-12 14:35:13.827281] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:37.247 [2024-07-12 14:35:13.827313] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.247 [2024-07-12 14:35:13.914587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.816 14:35:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.816 14:35:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:37.816 14:35:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1415360 00:06:37.816 14:35:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1415360 /var/tmp/spdk2.sock 00:06:37.816 14:35:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:37.816 14:35:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1415360 ']' 00:06:37.816 14:35:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.816 14:35:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.816 14:35:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.817 14:35:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.817 14:35:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.075 [2024-07-12 14:35:14.617736] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:06:38.075 [2024-07-12 14:35:14.617807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1415360 ] 00:06:38.075 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.075 [2024-07-12 14:35:14.716604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.334 [2024-07-12 14:35:14.883810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.900 14:35:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.900 14:35:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:38.900 14:35:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1415360 00:06:38.900 14:35:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1415360 00:06:38.901 14:35:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.835 lslocks: write error 00:06:39.835 14:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1415196 00:06:39.835 14:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1415196 ']' 00:06:39.835 14:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1415196 00:06:39.835 14:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:39.835 14:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:39.835 14:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1415196 00:06:39.835 14:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:39.835 14:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:39.835 14:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1415196' 00:06:39.835 killing process with pid 1415196 00:06:39.835 14:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1415196 00:06:39.835 14:35:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1415196 00:06:40.771 14:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1415360 00:06:40.771 14:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1415360 ']' 00:06:40.771 14:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1415360 00:06:40.771 14:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:40.771 14:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.771 14:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1415360 00:06:40.771 14:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:40.771 14:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.771 14:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1415360' 00:06:40.771 killing process with pid 1415360 00:06:40.771 14:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1415360 00:06:40.771 14:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1415360 00:06:41.030 00:06:41.030 real 0m3.898s 00:06:41.030 user 0m4.098s 00:06:41.030 sys 0m1.305s 00:06:41.030 14:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.030 14:35:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.030 ************************************ 00:06:41.030 END TEST locking_app_on_unlocked_coremask 00:06:41.030 ************************************ 00:06:41.030 14:35:17 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:41.030 14:35:17 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:41.030 14:35:17 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.030 14:35:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.030 14:35:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.030 ************************************ 00:06:41.030 START TEST locking_app_on_locked_coremask 00:06:41.030 ************************************ 00:06:41.030 14:35:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:41.030 14:35:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1415759 00:06:41.030 14:35:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1415759 /var/tmp/spdk.sock 00:06:41.030 14:35:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.030 14:35:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1415759 ']' 00:06:41.030 14:35:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.030 14:35:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.030 14:35:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.030 14:35:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.030 14:35:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.030 [2024-07-12 14:35:17.719878] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:06:41.030 [2024-07-12 14:35:17.719962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1415759 ] 00:06:41.030 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.030 [2024-07-12 14:35:17.810484] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.289 [2024-07-12 14:35:17.898215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.926 14:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.926 14:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:41.926 14:35:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:41.926 14:35:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1415889 00:06:41.926 14:35:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1415889 /var/tmp/spdk2.sock 00:06:41.926 14:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:41.926 14:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1415889 /var/tmp/spdk2.sock 00:06:41.926 14:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:41.926 14:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.926 14:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:41.926 14:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.926 14:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1415889 /var/tmp/spdk2.sock 00:06:41.926 14:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1415889 ']' 00:06:41.926 14:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.926 14:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.926 14:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.926 14:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.926 14:35:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.926 [2024-07-12 14:35:18.581988] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:06:41.926 [2024-07-12 14:35:18.582049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1415889 ] 00:06:41.926 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.926 [2024-07-12 14:35:18.681333] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1415759 has claimed it. 00:06:41.926 [2024-07-12 14:35:18.681376] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:42.493 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1415889) - No such process 00:06:42.493 ERROR: process (pid: 1415889) is no longer running 00:06:42.493 14:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.493 14:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:42.493 14:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:42.493 14:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:42.493 14:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:42.493 14:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:42.493 14:35:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1415759 00:06:42.493 14:35:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1415759 00:06:42.493 14:35:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.059 lslocks: write error 00:06:43.059 14:35:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1415759 00:06:43.059 14:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1415759 ']' 00:06:43.059 14:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1415759 00:06:43.059 14:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:43.059 14:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.059 14:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1415759 00:06:43.059 14:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.059 14:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.059 14:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1415759' 00:06:43.059 killing process with pid 1415759 00:06:43.059 14:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1415759 00:06:43.059 14:35:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1415759 00:06:43.627 00:06:43.627 real 0m2.486s 00:06:43.627 user 0m2.677s 00:06:43.627 sys 0m0.763s 00:06:43.627 14:35:20 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.627 14:35:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.627 ************************************ 00:06:43.627 END TEST locking_app_on_locked_coremask 00:06:43.627 ************************************ 00:06:43.627 14:35:20 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:43.627 14:35:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:43.627 14:35:20 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.627 14:35:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.627 14:35:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.627 ************************************ 00:06:43.627 START TEST locking_overlapped_coremask 00:06:43.627 ************************************ 00:06:43.627 14:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:43.627 14:35:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1416149 00:06:43.627 14:35:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1416149 /var/tmp/spdk.sock 00:06:43.627 14:35:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:43.627 14:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1416149 ']' 00:06:43.627 14:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.627 14:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.627 14:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.627 14:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.627 14:35:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.627 [2024-07-12 14:35:20.290588] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:06:43.627 [2024-07-12 14:35:20.290657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1416149 ] 00:06:43.627 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.627 [2024-07-12 14:35:20.380857] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.886 [2024-07-12 14:35:20.474656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.886 [2024-07-12 14:35:20.474757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.886 [2024-07-12 14:35:20.474757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.453 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.453 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:44.453 14:35:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1416219 00:06:44.453 14:35:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1416219 /var/tmp/spdk2.sock 00:06:44.453 14:35:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:44.453 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:44.453 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1416219 /var/tmp/spdk2.sock 00:06:44.453 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:44.453 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.453 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:44.453 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.453 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1416219 /var/tmp/spdk2.sock 00:06:44.453 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1416219 ']' 00:06:44.453 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.453 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.453 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.453 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.453 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.453 [2024-07-12 14:35:21.165654] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:06:44.453 [2024-07-12 14:35:21.165719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1416219 ] 00:06:44.453 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.712 [2024-07-12 14:35:21.271836] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1416149 has claimed it. 00:06:44.712 [2024-07-12 14:35:21.271877] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:45.280 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1416219) - No such process 00:06:45.280 ERROR: process (pid: 1416219) is no longer running 00:06:45.280 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.280 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:45.280 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:45.280 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:45.280 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:45.280 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:45.280 14:35:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:45.280 14:35:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:45.280 14:35:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:45.280 14:35:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:45.280 14:35:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1416149 00:06:45.280 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1416149 ']' 00:06:45.280 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1416149 00:06:45.280 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:45.280 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.280 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1416149 00:06:45.280 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:45.280 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:45.280 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1416149' 00:06:45.280 killing process with pid 1416149 00:06:45.280 14:35:21 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1416149 00:06:45.280 14:35:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1416149 00:06:45.539 00:06:45.539 real 0m1.927s 00:06:45.539 user 0m5.327s 00:06:45.539 sys 0m0.503s 00:06:45.539 14:35:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.539 14:35:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.539 ************************************ 00:06:45.539 END TEST locking_overlapped_coremask 00:06:45.539 ************************************ 00:06:45.539 14:35:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:45.539 14:35:22 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:45.539 14:35:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.539 14:35:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.539 14:35:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.539 ************************************ 00:06:45.539 START TEST locking_overlapped_coremask_via_rpc 00:06:45.539 ************************************ 00:06:45.539 14:35:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:45.539 14:35:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1416374 00:06:45.540 14:35:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1416374 /var/tmp/spdk.sock 00:06:45.540 14:35:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:45.540 14:35:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1416374 ']' 00:06:45.540 14:35:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.540 14:35:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.540 14:35:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.540 14:35:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.540 14:35:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.540 [2024-07-12 14:35:22.305263] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:06:45.540 [2024-07-12 14:35:22.305325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1416374 ] 00:06:45.798 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.798 [2024-07-12 14:35:22.395174] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:45.798 [2024-07-12 14:35:22.395206] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.798 [2024-07-12 14:35:22.487820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.798 [2024-07-12 14:35:22.487920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.798 [2024-07-12 14:35:22.487921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.732 14:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.732 14:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:46.732 14:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1416551 00:06:46.732 14:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1416551 /var/tmp/spdk2.sock 00:06:46.732 14:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:46.732 14:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1416551 ']' 00:06:46.732 14:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.732 14:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.732 14:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.732 14:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.732 14:35:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.732 [2024-07-12 14:35:23.179165] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:06:46.732 [2024-07-12 14:35:23.179254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1416551 ] 00:06:46.732 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.732 [2024-07-12 14:35:23.280699] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:46.732 [2024-07-12 14:35:23.280726] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.732 [2024-07-12 14:35:23.447664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.733 [2024-07-12 14:35:23.451583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.733 [2024-07-12 14:35:23.451583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.299 [2024-07-12 14:35:24.026594] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1416374 has claimed it. 
00:06:47.299 request: 00:06:47.299 { 00:06:47.299 "method": "framework_enable_cpumask_locks", 00:06:47.299 "req_id": 1 00:06:47.299 } 00:06:47.299 Got JSON-RPC error response 00:06:47.299 response: 00:06:47.299 { 00:06:47.299 "code": -32603, 00:06:47.299 "message": "Failed to claim CPU core: 2" 00:06:47.299 } 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1416374 /var/tmp/spdk.sock 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1416374 ']' 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.299 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.557 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.557 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:47.557 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1416551 /var/tmp/spdk2.sock 00:06:47.557 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1416551 ']' 00:06:47.557 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.557 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.557 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
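The JSON-RPC exchange above shows framework_enable_cpumask_locks failing with -32603 because the first target (pid 1416374, started with -m 0x7) already holds the per-core lock file for core 2. A minimal sketch of the same sequence, assuming the spdk_tgt and rpc.py layout of this workspace and that rpc.py exposes the method under the same name; the masks, socket path and flags are taken from the xtrace above, everything else is illustrative:

    # first target claims cores 0-2 but defers the per-core lock files
    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    # (wait for /var/tmp/spdk.sock to appear, as waitforlisten does above)
    # take the locks explicitly; creates /var/tmp/spdk_cpu_lock_000..002
    ./scripts/rpc.py framework_enable_cpumask_locks
    # second target overlaps on core 2 (mask 0x1c) and uses its own RPC socket
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    # asking the second target to claim its cores now fails with
    # "Failed to claim CPU core: 2" (-32603), exactly as in the response above
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks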
00:06:47.557 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.557 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.816 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.816 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:47.816 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:47.816 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:47.816 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:47.816 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:47.816 00:06:47.816 real 0m2.138s 00:06:47.816 user 0m0.856s 00:06:47.816 sys 0m0.222s 00:06:47.816 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.816 14:35:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.816 ************************************ 00:06:47.816 END TEST locking_overlapped_coremask_via_rpc 00:06:47.816 ************************************ 00:06:47.816 14:35:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:47.816 14:35:24 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:47.816 14:35:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1416374 ]] 00:06:47.816 14:35:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1416374 00:06:47.816 14:35:24 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1416374 ']' 00:06:47.816 14:35:24 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1416374 00:06:47.816 14:35:24 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:47.816 14:35:24 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.816 14:35:24 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1416374 00:06:47.816 14:35:24 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:47.816 14:35:24 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:47.816 14:35:24 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1416374' 00:06:47.816 killing process with pid 1416374 00:06:47.816 14:35:24 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1416374 00:06:47.816 14:35:24 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1416374 00:06:48.074 14:35:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1416551 ]] 00:06:48.074 14:35:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1416551 00:06:48.074 14:35:24 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1416551 ']' 00:06:48.074 14:35:24 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1416551 00:06:48.074 14:35:24 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:48.074 14:35:24 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.332 14:35:24 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1416551 00:06:48.332 14:35:24 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:48.332 14:35:24 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:48.332 14:35:24 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1416551' 00:06:48.332 killing process with pid 1416551 00:06:48.332 14:35:24 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1416551 00:06:48.332 14:35:24 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1416551 00:06:48.591 14:35:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:48.591 14:35:25 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:48.591 14:35:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1416374 ]] 00:06:48.591 14:35:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1416374 00:06:48.591 14:35:25 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1416374 ']' 00:06:48.591 14:35:25 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1416374 00:06:48.591 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1416374) - No such process 00:06:48.591 14:35:25 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1416374 is not found' 00:06:48.591 Process with pid 1416374 is not found 00:06:48.591 14:35:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1416551 ]] 00:06:48.591 14:35:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1416551 00:06:48.591 14:35:25 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1416551 ']' 00:06:48.591 14:35:25 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1416551 00:06:48.591 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1416551) - No such process 00:06:48.591 14:35:25 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1416551 is not found' 00:06:48.591 Process with pid 1416551 is not found 00:06:48.591 14:35:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:48.591 00:06:48.591 real 0m19.265s 00:06:48.591 user 0m31.368s 00:06:48.591 sys 0m6.377s 00:06:48.591 14:35:25 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.591 14:35:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.591 ************************************ 00:06:48.591 END TEST cpu_locks 00:06:48.591 ************************************ 00:06:48.591 14:35:25 event -- common/autotest_common.sh@1142 -- # return 0 00:06:48.591 00:06:48.591 real 0m45.279s 00:06:48.591 user 1m22.925s 00:06:48.591 sys 0m10.949s 00:06:48.591 14:35:25 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.591 14:35:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.591 ************************************ 00:06:48.591 END TEST event 00:06:48.591 ************************************ 00:06:48.591 14:35:25 -- common/autotest_common.sh@1142 -- # return 0 00:06:48.591 14:35:25 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:48.591 14:35:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.591 14:35:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.591 
14:35:25 -- common/autotest_common.sh@10 -- # set +x 00:06:48.849 ************************************ 00:06:48.849 START TEST thread 00:06:48.849 ************************************ 00:06:48.849 14:35:25 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:48.849 * Looking for test storage... 00:06:48.849 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:48.849 14:35:25 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:48.849 14:35:25 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:48.849 14:35:25 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.849 14:35:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.849 ************************************ 00:06:48.849 START TEST thread_poller_perf 00:06:48.849 ************************************ 00:06:48.849 14:35:25 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:48.849 [2024-07-12 14:35:25.554617] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:06:48.849 [2024-07-12 14:35:25.554711] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1416994 ] 00:06:48.849 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.106 [2024-07-12 14:35:25.646397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.106 [2024-07-12 14:35:25.730985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.106 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:50.038 ====================================== 00:06:50.038 busy:2304784246 (cyc) 00:06:50.038 total_run_count: 846000 00:06:50.038 tsc_hz: 2300000000 (cyc) 00:06:50.038 ====================================== 00:06:50.038 poller_cost: 2724 (cyc), 1184 (nsec) 00:06:50.038 00:06:50.038 real 0m1.275s 00:06:50.038 user 0m1.167s 00:06:50.038 sys 0m0.103s 00:06:50.038 14:35:26 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.038 14:35:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:50.038 ************************************ 00:06:50.038 END TEST thread_poller_perf 00:06:50.038 ************************************ 00:06:50.295 14:35:26 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:50.295 14:35:26 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:50.295 14:35:26 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:50.295 14:35:26 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.295 14:35:26 thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.295 ************************************ 00:06:50.295 START TEST thread_poller_perf 00:06:50.295 ************************************ 00:06:50.295 14:35:26 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:50.295 [2024-07-12 14:35:26.915535] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:06:50.295 [2024-07-12 14:35:26.915622] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1417187 ] 00:06:50.295 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.295 [2024-07-12 14:35:27.008478] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.554 [2024-07-12 14:35:27.093972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.554 Running 1000 pollers for 1 seconds with 0 microseconds period. 
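The poller_cost figures in these poller_perf summaries appear to follow directly from the printed counters: busy cycles divided by total_run_count gives the per-poll cost in cycles, and dividing by the TSC rate converts that to nanoseconds. A quick check against the 1-microsecond-period run above (the relation is inferred from the numbers, not taken from poller_perf's source; the same arithmetic reproduces the 162 cyc / 70 nsec figures of the 0-microsecond run below):

    # 2304784246 busy cycles over 846000 runs -> ~2724 cyc per poll; at 2.3 GHz that is ~1184 nsec
    awk 'BEGIN { cyc = 2304784246 / 846000; printf "%d cyc, %d nsec\n", cyc, cyc / (2300000000 / 1e9) }'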
00:06:51.490 ====================================== 00:06:51.490 busy:2301313398 (cyc) 00:06:51.490 total_run_count: 14140000 00:06:51.490 tsc_hz: 2300000000 (cyc) 00:06:51.490 ====================================== 00:06:51.490 poller_cost: 162 (cyc), 70 (nsec) 00:06:51.490 00:06:51.490 real 0m1.270s 00:06:51.490 user 0m1.152s 00:06:51.490 sys 0m0.113s 00:06:51.490 14:35:28 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.490 14:35:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:51.490 ************************************ 00:06:51.490 END TEST thread_poller_perf 00:06:51.490 ************************************ 00:06:51.490 14:35:28 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:51.490 14:35:28 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:06:51.490 14:35:28 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:51.490 14:35:28 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.490 14:35:28 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.490 14:35:28 thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.490 ************************************ 00:06:51.490 START TEST thread_spdk_lock 00:06:51.490 ************************************ 00:06:51.490 14:35:28 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:51.490 [2024-07-12 14:35:28.261499] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:06:51.490 [2024-07-12 14:35:28.261595] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1417388 ] 00:06:51.748 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.748 [2024-07-12 14:35:28.351740] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.748 [2024-07-12 14:35:28.442104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.748 [2024-07-12 14:35:28.442104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.315 [2024-07-12 14:35:28.928754] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:52.315 [2024-07-12 14:35:28.928788] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:52.315 [2024-07-12 14:35:28.928814] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x14cdec0 00:06:52.315 [2024-07-12 14:35:28.929642] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:52.315 [2024-07-12 14:35:28.929746] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:52.315 [2024-07-12 14:35:28.929764] 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:52.315 Starting test contend 00:06:52.315 Worker Delay Wait us Hold us Total us 00:06:52.315 0 3 178364 184629 362994 00:06:52.315 1 5 94131 284741 378873 00:06:52.315 PASS test contend 00:06:52.315 Starting test hold_by_poller 00:06:52.315 PASS test hold_by_poller 00:06:52.315 Starting test hold_by_message 00:06:52.315 PASS test hold_by_message 00:06:52.315 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:06:52.315 100014 assertions passed 00:06:52.315 0 assertions failed 00:06:52.315 00:06:52.315 real 0m0.755s 00:06:52.315 user 0m1.138s 00:06:52.315 sys 0m0.101s 00:06:52.315 14:35:29 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.315 14:35:29 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:06:52.315 ************************************ 00:06:52.315 END TEST thread_spdk_lock 00:06:52.315 ************************************ 00:06:52.315 14:35:29 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:52.315 00:06:52.315 real 0m3.660s 00:06:52.315 user 0m3.605s 00:06:52.315 sys 0m0.558s 00:06:52.315 14:35:29 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.315 14:35:29 thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.315 ************************************ 00:06:52.315 END TEST thread 00:06:52.315 ************************************ 00:06:52.315 14:35:29 -- common/autotest_common.sh@1142 -- # return 0 00:06:52.315 14:35:29 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:06:52.315 14:35:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.316 14:35:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.316 14:35:29 -- common/autotest_common.sh@10 -- # set +x 00:06:52.574 ************************************ 00:06:52.574 START TEST accel 00:06:52.574 ************************************ 00:06:52.574 14:35:29 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:06:52.574 * Looking for test storage... 00:06:52.574 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:52.574 14:35:29 accel -- accel/accel.sh@95 -- # declare -A expected_opcs 00:06:52.574 14:35:29 accel -- accel/accel.sh@96 -- # get_expected_opcs 00:06:52.574 14:35:29 accel -- accel/accel.sh@69 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:52.574 14:35:29 accel -- accel/accel.sh@71 -- # spdk_tgt_pid=1417486 00:06:52.574 14:35:29 accel -- accel/accel.sh@72 -- # waitforlisten 1417486 00:06:52.574 14:35:29 accel -- common/autotest_common.sh@829 -- # '[' -z 1417486 ']' 00:06:52.574 14:35:29 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.574 14:35:29 accel -- accel/accel.sh@70 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:52.574 14:35:29 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.574 14:35:29 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:52.574 14:35:29 accel -- accel/accel.sh@70 -- # build_accel_config 00:06:52.574 14:35:29 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.574 14:35:29 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.574 14:35:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.574 14:35:29 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.574 14:35:29 accel -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:52.574 14:35:29 accel -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:52.574 14:35:29 accel -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:52.574 14:35:29 accel -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:52.574 14:35:29 accel -- accel/accel.sh@49 -- # local IFS=, 00:06:52.574 14:35:29 accel -- accel/accel.sh@50 -- # jq -r . 00:06:52.574 [2024-07-12 14:35:29.276818] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:06:52.574 [2024-07-12 14:35:29.276894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1417486 ] 00:06:52.574 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.834 [2024-07-12 14:35:29.365019] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.834 [2024-07-12 14:35:29.452750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.402 14:35:30 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.402 14:35:30 accel -- common/autotest_common.sh@862 -- # return 0 00:06:53.402 14:35:30 accel -- accel/accel.sh@74 -- # [[ 0 -gt 0 ]] 00:06:53.402 14:35:30 accel -- accel/accel.sh@77 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:53.402 14:35:30 accel -- accel/accel.sh@78 -- # [[ 0 -gt 0 ]] 00:06:53.402 14:35:30 accel -- accel/accel.sh@81 -- # [[ 0 -gt 0 ]] 00:06:53.402 14:35:30 accel -- accel/accel.sh@82 -- # [[ -n '' ]] 00:06:53.402 14:35:30 accel -- accel/accel.sh@84 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:53.402 14:35:30 accel -- accel/accel.sh@84 -- # rpc_cmd accel_get_opc_assignments 00:06:53.402 14:35:30 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.402 14:35:30 accel -- accel/accel.sh@84 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:53.402 14:35:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.402 14:35:30 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.402 14:35:30 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # IFS== 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # read -r opc module 00:06:53.402 14:35:30 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:06:53.402 14:35:30 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # IFS== 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # read -r opc module 00:06:53.402 14:35:30 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:06:53.402 14:35:30 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # IFS== 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # read -r opc module 00:06:53.402 14:35:30 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:06:53.402 14:35:30 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # IFS== 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # read -r opc module 00:06:53.402 14:35:30 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:06:53.402 14:35:30 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # IFS== 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # read -r opc module 00:06:53.402 14:35:30 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:06:53.402 14:35:30 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # IFS== 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # read -r opc module 00:06:53.402 14:35:30 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:06:53.402 14:35:30 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # IFS== 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # read -r opc module 00:06:53.402 14:35:30 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:06:53.402 14:35:30 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # IFS== 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # read -r opc module 00:06:53.402 14:35:30 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:06:53.402 14:35:30 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # IFS== 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # read -r opc module 00:06:53.402 14:35:30 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:06:53.402 14:35:30 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # IFS== 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # read -r opc module 00:06:53.402 14:35:30 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:06:53.402 14:35:30 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # IFS== 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # read -r opc module 00:06:53.402 14:35:30 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:06:53.402 
14:35:30 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # IFS== 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # read -r opc module 00:06:53.402 14:35:30 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:06:53.402 14:35:30 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # IFS== 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # read -r opc module 00:06:53.402 14:35:30 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:06:53.402 14:35:30 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # IFS== 00:06:53.402 14:35:30 accel -- accel/accel.sh@86 -- # read -r opc module 00:06:53.403 14:35:30 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:06:53.403 14:35:30 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.403 14:35:30 accel -- accel/accel.sh@86 -- # IFS== 00:06:53.403 14:35:30 accel -- accel/accel.sh@86 -- # read -r opc module 00:06:53.403 14:35:30 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:06:53.403 14:35:30 accel -- accel/accel.sh@89 -- # killprocess 1417486 00:06:53.403 14:35:30 accel -- common/autotest_common.sh@948 -- # '[' -z 1417486 ']' 00:06:53.403 14:35:30 accel -- common/autotest_common.sh@952 -- # kill -0 1417486 00:06:53.403 14:35:30 accel -- common/autotest_common.sh@953 -- # uname 00:06:53.403 14:35:30 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:53.403 14:35:30 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1417486 00:06:53.660 14:35:30 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:53.660 14:35:30 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:53.660 14:35:30 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1417486' 00:06:53.660 killing process with pid 1417486 00:06:53.660 14:35:30 accel -- common/autotest_common.sh@967 -- # kill 1417486 00:06:53.660 14:35:30 accel -- common/autotest_common.sh@972 -- # wait 1417486 00:06:53.921 14:35:30 accel -- accel/accel.sh@90 -- # trap - ERR 00:06:53.921 14:35:30 accel -- accel/accel.sh@103 -- # run_test accel_help accel_perf -h 00:06:53.921 14:35:30 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:53.921 14:35:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.921 14:35:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.921 14:35:30 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:53.921 14:35:30 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:53.921 14:35:30 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:53.921 14:35:30 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.921 14:35:30 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.921 14:35:30 accel.accel_help -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:53.921 14:35:30 accel.accel_help -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:53.921 14:35:30 accel.accel_help -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:53.921 14:35:30 accel.accel_help -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:53.921 14:35:30 accel.accel_help -- accel/accel.sh@49 -- # local IFS=, 00:06:53.921 14:35:30 accel.accel_help -- accel/accel.sh@50 -- # jq -r 
. 00:06:53.921 14:35:30 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.921 14:35:30 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:53.921 14:35:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:53.921 14:35:30 accel -- accel/accel.sh@105 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:53.921 14:35:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:53.921 14:35:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.921 14:35:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.921 ************************************ 00:06:53.921 START TEST accel_missing_filename 00:06:53.921 ************************************ 00:06:53.921 14:35:30 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:53.921 14:35:30 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:53.921 14:35:30 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:53.921 14:35:30 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:54.180 14:35:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.180 14:35:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:54.180 14:35:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.180 14:35:30 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:54.180 14:35:30 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:54.180 14:35:30 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:54.180 14:35:30 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.180 14:35:30 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.180 14:35:30 accel.accel_missing_filename -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:54.180 14:35:30 accel.accel_missing_filename -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:54.180 14:35:30 accel.accel_missing_filename -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:54.180 14:35:30 accel.accel_missing_filename -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:54.180 14:35:30 accel.accel_missing_filename -- accel/accel.sh@49 -- # local IFS=, 00:06:54.180 14:35:30 accel.accel_missing_filename -- accel/accel.sh@50 -- # jq -r . 00:06:54.180 [2024-07-12 14:35:30.731304] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:06:54.180 [2024-07-12 14:35:30.731388] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1417759 ] 00:06:54.180 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.180 [2024-07-12 14:35:30.825177] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.180 [2024-07-12 14:35:30.920208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.180 [2024-07-12 14:35:30.966720] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:54.439 [2024-07-12 14:35:31.035863] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:54.439 A filename is required. 00:06:54.439 14:35:31 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:54.439 14:35:31 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:54.439 14:35:31 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:54.439 14:35:31 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:54.439 14:35:31 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:54.439 14:35:31 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:54.439 00:06:54.439 real 0m0.406s 00:06:54.439 user 0m0.278s 00:06:54.439 sys 0m0.167s 00:06:54.439 14:35:31 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.439 14:35:31 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:54.439 ************************************ 00:06:54.439 END TEST accel_missing_filename 00:06:54.439 ************************************ 00:06:54.439 14:35:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.439 14:35:31 accel -- accel/accel.sh@107 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:54.439 14:35:31 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:54.439 14:35:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.439 14:35:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.439 ************************************ 00:06:54.439 START TEST accel_compress_verify 00:06:54.439 ************************************ 00:06:54.439 14:35:31 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:54.439 14:35:31 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:54.439 14:35:31 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:54.439 14:35:31 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:54.439 14:35:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.439 14:35:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:54.439 14:35:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.439 14:35:31 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 
-w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:54.439 14:35:31 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:54.439 14:35:31 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:54.439 14:35:31 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.439 14:35:31 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.439 14:35:31 accel.accel_compress_verify -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:54.439 14:35:31 accel.accel_compress_verify -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:54.439 14:35:31 accel.accel_compress_verify -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:54.439 14:35:31 accel.accel_compress_verify -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:54.439 14:35:31 accel.accel_compress_verify -- accel/accel.sh@49 -- # local IFS=, 00:06:54.439 14:35:31 accel.accel_compress_verify -- accel/accel.sh@50 -- # jq -r . 00:06:54.439 [2024-07-12 14:35:31.222720] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:06:54.439 [2024-07-12 14:35:31.222814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1417862 ] 00:06:54.698 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.698 [2024-07-12 14:35:31.313681] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.698 [2024-07-12 14:35:31.397456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.698 [2024-07-12 14:35:31.443360] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:54.956 [2024-07-12 14:35:31.512661] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:54.956 00:06:54.956 Compression does not support the verify option, aborting. 
00:06:54.956 14:35:31 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:54.956 14:35:31 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:54.956 14:35:31 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:54.956 14:35:31 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:54.956 14:35:31 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:54.956 14:35:31 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:54.956 00:06:54.956 real 0m0.390s 00:06:54.956 user 0m0.270s 00:06:54.956 sys 0m0.156s 00:06:54.956 14:35:31 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.956 14:35:31 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:54.956 ************************************ 00:06:54.956 END TEST accel_compress_verify 00:06:54.956 ************************************ 00:06:54.956 14:35:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.956 14:35:31 accel -- accel/accel.sh@109 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:54.956 14:35:31 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:54.956 14:35:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.956 14:35:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.956 ************************************ 00:06:54.956 START TEST accel_wrong_workload 00:06:54.956 ************************************ 00:06:54.956 14:35:31 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:54.956 14:35:31 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:54.956 14:35:31 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:54.956 14:35:31 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:54.956 14:35:31 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.956 14:35:31 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:54.956 14:35:31 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.956 14:35:31 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:54.956 14:35:31 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:54.956 14:35:31 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:54.956 14:35:31 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.956 14:35:31 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.956 14:35:31 accel.accel_wrong_workload -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:54.956 14:35:31 accel.accel_wrong_workload -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:54.956 14:35:31 accel.accel_wrong_workload -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:54.956 14:35:31 accel.accel_wrong_workload -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:54.956 14:35:31 accel.accel_wrong_workload -- accel/accel.sh@49 -- # local IFS=, 00:06:54.956 14:35:31 accel.accel_wrong_workload -- accel/accel.sh@50 -- # jq -r . 
00:06:54.956 Unsupported workload type: foobar 00:06:54.956 [2024-07-12 14:35:31.694315] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:54.956 accel_perf options: 00:06:54.956 [-h help message] 00:06:54.956 [-q queue depth per core] 00:06:54.956 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:54.956 [-T number of threads per core 00:06:54.956 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:54.956 [-t time in seconds] 00:06:54.956 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:54.956 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:54.956 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:54.956 [-l for compress/decompress workloads, name of uncompressed input file 00:06:54.956 [-S for crc32c workload, use this seed value (default 0) 00:06:54.957 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:54.957 [-f for fill workload, use this BYTE value (default 255) 00:06:54.957 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:54.957 [-y verify result if this switch is on] 00:06:54.957 [-a tasks to allocate per core (default: same value as -q)] 00:06:54.957 Can be used to spread operations across a wider range of memory. 00:06:54.957 14:35:31 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:54.957 14:35:31 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:54.957 14:35:31 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:54.957 14:35:31 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:54.957 00:06:54.957 real 0m0.029s 00:06:54.957 user 0m0.016s 00:06:54.957 sys 0m0.013s 00:06:54.957 14:35:31 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.957 14:35:31 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:54.957 ************************************ 00:06:54.957 END TEST accel_wrong_workload 00:06:54.957 ************************************ 00:06:54.957 Error: writing output failed: Broken pipe 00:06:54.957 14:35:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.957 14:35:31 accel -- accel/accel.sh@111 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:54.957 14:35:31 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:54.957 14:35:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.957 14:35:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.215 ************************************ 00:06:55.215 START TEST accel_negative_buffers 00:06:55.215 ************************************ 00:06:55.215 14:35:31 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:55.215 14:35:31 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:55.215 14:35:31 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:55.215 14:35:31 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:55.215 14:35:31 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:06:55.215 14:35:31 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:55.215 14:35:31 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.215 14:35:31 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:55.215 14:35:31 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:55.215 14:35:31 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:55.215 14:35:31 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.215 14:35:31 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.215 14:35:31 accel.accel_negative_buffers -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:55.215 14:35:31 accel.accel_negative_buffers -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:55.215 14:35:31 accel.accel_negative_buffers -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:55.215 14:35:31 accel.accel_negative_buffers -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:55.215 14:35:31 accel.accel_negative_buffers -- accel/accel.sh@49 -- # local IFS=, 00:06:55.215 14:35:31 accel.accel_negative_buffers -- accel/accel.sh@50 -- # jq -r . 00:06:55.215 -x option must be non-negative. 00:06:55.215 [2024-07-12 14:35:31.806939] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:55.215 accel_perf options: 00:06:55.215 [-h help message] 00:06:55.215 [-q queue depth per core] 00:06:55.215 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:55.215 [-T number of threads per core 00:06:55.215 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:55.215 [-t time in seconds] 00:06:55.215 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:55.215 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:55.215 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:55.215 [-l for compress/decompress workloads, name of uncompressed input file 00:06:55.215 [-S for crc32c workload, use this seed value (default 0) 00:06:55.215 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:55.215 [-f for fill workload, use this BYTE value (default 255) 00:06:55.215 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:55.215 [-y verify result if this switch is on] 00:06:55.215 [-a tasks to allocate per core (default: same value as -q)] 00:06:55.215 Can be used to spread operations across a wider range of memory. 
00:06:55.215 14:35:31 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:55.215 14:35:31 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:55.215 14:35:31 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:55.215 14:35:31 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:55.215 00:06:55.215 real 0m0.029s 00:06:55.215 user 0m0.014s 00:06:55.215 sys 0m0.015s 00:06:55.215 14:35:31 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.215 14:35:31 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:55.215 ************************************ 00:06:55.215 END TEST accel_negative_buffers 00:06:55.215 ************************************ 00:06:55.215 Error: writing output failed: Broken pipe 00:06:55.215 14:35:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.215 14:35:31 accel -- accel/accel.sh@115 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:55.215 14:35:31 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:55.215 14:35:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.215 14:35:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.215 ************************************ 00:06:55.215 START TEST accel_crc32c 00:06:55.215 ************************************ 00:06:55.215 14:35:31 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:55.215 14:35:31 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:55.215 14:35:31 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:55.215 14:35:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.215 14:35:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.215 14:35:31 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:55.215 14:35:31 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:55.215 14:35:31 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:55.215 14:35:31 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.215 14:35:31 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.215 14:35:31 accel.accel_crc32c -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:55.215 14:35:31 accel.accel_crc32c -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:55.215 14:35:31 accel.accel_crc32c -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:55.215 14:35:31 accel.accel_crc32c -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:55.215 14:35:31 accel.accel_crc32c -- accel/accel.sh@49 -- # local IFS=, 00:06:55.215 14:35:31 accel.accel_crc32c -- accel/accel.sh@50 -- # jq -r . 00:06:55.215 [2024-07-12 14:35:31.917288] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:06:55.215 [2024-07-12 14:35:31.917375] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1417933 ] 00:06:55.215 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.216 [2024-07-12 14:35:31.989460] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.474 [2024-07-12 14:35:32.076258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 
00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.474 14:35:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.851 14:35:33 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:56.851 14:35:33 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.851 00:06:56.851 real 0m1.368s 00:06:56.851 user 0m1.240s 00:06:56.851 sys 0m0.141s 00:06:56.851 14:35:33 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.851 14:35:33 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:56.851 ************************************ 00:06:56.851 END TEST accel_crc32c 00:06:56.851 ************************************ 00:06:56.851 14:35:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.851 14:35:33 accel -- accel/accel.sh@116 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:56.851 14:35:33 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:56.851 14:35:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.851 14:35:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.851 ************************************ 00:06:56.851 START TEST accel_crc32c_C2 00:06:56.851 ************************************ 00:06:56.851 14:35:33 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:56.851 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.851 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:56.851 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.851 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.851 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:56.851 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:56.851 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.851 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.851 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.851 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:56.851 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:56.851 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:56.851 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:56.851 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@49 -- # local IFS=, 00:06:56.851 14:35:33 
accel.accel_crc32c_C2 -- accel/accel.sh@50 -- # jq -r . 00:06:56.851 [2024-07-12 14:35:33.370910] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:06:56.852 [2024-07-12 14:35:33.370994] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418167 ] 00:06:56.852 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.852 [2024-07-12 14:35:33.459435] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.852 [2024-07-12 14:35:33.543667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.852 14:35:33 accel.accel_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.852 14:35:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.230 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.230 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.230 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.230 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.230 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.230 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.230 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.230 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.230 
14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.230 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.230 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.230 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.230 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.230 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.230 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.230 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.231 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.231 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.231 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.231 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.231 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.231 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.231 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.231 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.231 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.231 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:58.231 14:35:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.231 00:06:58.231 real 0m1.396s 00:06:58.231 user 0m1.254s 00:06:58.231 sys 0m0.156s 00:06:58.231 14:35:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.231 14:35:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:58.231 ************************************ 00:06:58.231 END TEST accel_crc32c_C2 00:06:58.231 ************************************ 00:06:58.231 14:35:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.231 14:35:34 accel -- accel/accel.sh@117 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:58.231 14:35:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:58.231 14:35:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.231 14:35:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.231 ************************************ 00:06:58.231 START TEST accel_copy 00:06:58.231 ************************************ 00:06:58.231 14:35:34 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:58.231 14:35:34 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:58.231 14:35:34 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:58.231 14:35:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.231 14:35:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.231 14:35:34 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:58.231 14:35:34 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:58.231 14:35:34 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:58.231 14:35:34 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.231 14:35:34 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.231 14:35:34 accel.accel_copy -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 
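The copy stage being set up here goes through the same run_test wrapper as every other stage in this log, which prints the START TEST / END TEST banners, times the command, and propagates its status. A simplified sketch of that pattern follows; it is an illustration only, not the actual helper in common/autotest_common.sh, which also manages xtrace and error bookkeeping.

# Illustration of the run_test banner-and-time pattern seen throughout this log.
run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"
  local es=$?   # status of the timed command
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return "$es"
}

run_test_sketch accel_copy \
  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy -y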
00:06:58.231 14:35:34 accel.accel_copy -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:58.231 14:35:34 accel.accel_copy -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:58.231 14:35:34 accel.accel_copy -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:58.231 14:35:34 accel.accel_copy -- accel/accel.sh@49 -- # local IFS=, 00:06:58.231 14:35:34 accel.accel_copy -- accel/accel.sh@50 -- # jq -r . 00:06:58.231 [2024-07-12 14:35:34.853020] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:06:58.231 [2024-07-12 14:35:34.853104] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418412 ] 00:06:58.231 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.231 [2024-07-12 14:35:34.942957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.490 [2024-07-12 14:35:35.028122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.490 14:35:35 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.490 14:35:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 
00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:59.471 14:35:36 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.471 00:06:59.471 real 0m1.398s 00:06:59.471 user 0m1.256s 00:06:59.471 sys 0m0.155s 00:06:59.471 14:35:36 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.471 14:35:36 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:59.471 ************************************ 00:06:59.471 END TEST accel_copy 00:06:59.471 ************************************ 00:06:59.763 14:35:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.763 14:35:36 accel -- accel/accel.sh@118 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:59.763 14:35:36 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:59.763 14:35:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.763 14:35:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.763 ************************************ 00:06:59.763 START TEST accel_fill 00:06:59.763 ************************************ 00:06:59.763 14:35:36 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:59.763 14:35:36 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:59.763 14:35:36 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:59.763 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:59.763 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:59.763 14:35:36 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:59.763 14:35:36 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:59.763 14:35:36 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:59.763 14:35:36 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.763 14:35:36 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.763 14:35:36 accel.accel_fill -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:59.763 14:35:36 accel.accel_fill -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:59.763 14:35:36 accel.accel_fill -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:59.763 14:35:36 accel.accel_fill -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:59.763 14:35:36 accel.accel_fill -- accel/accel.sh@49 -- # local IFS=, 00:06:59.763 14:35:36 accel.accel_fill -- accel/accel.sh@50 -- # jq -r . 
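The fill stage configured here is the only one in this section that overrides queue depth and task count on the command line: per the option help earlier in the log, -f sets the fill byte, -q the queue depth per core, and -a the number of tasks to allocate per core (defaulting to the -q value). A minimal standalone sketch of the equivalent run, minus the /dev/fd config argument:

ACCEL_PERF=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf
# 1-second fill run: fill byte 128 (0x80), queue depth 64, 64 tasks per core, verify results.
"$ACCEL_PERF" -t 1 -w fill -f 128 -q 64 -a 64 -y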
00:06:59.763 [2024-07-12 14:35:36.336997] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:06:59.763 [2024-07-12 14:35:36.337085] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418672 ] 00:06:59.763 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.763 [2024-07-12 14:35:36.426675] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.763 [2024-07-12 14:35:36.517783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:00.023 14:35:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.024 14:35:36 accel.accel_fill 
-- accel/accel.sh@20 -- # val=software 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.024 14:35:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:00.961 14:35:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.961 00:07:00.961 real 0m1.404s 00:07:00.961 user 0m1.249s 00:07:00.961 sys 0m0.168s 00:07:00.961 14:35:37 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.961 14:35:37 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:00.961 ************************************ 00:07:00.961 END TEST accel_fill 00:07:00.961 ************************************ 00:07:01.220 14:35:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:01.220 14:35:37 accel -- accel/accel.sh@119 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:01.220 14:35:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:01.220 14:35:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.220 14:35:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.220 ************************************ 00:07:01.220 START TEST accel_copy_crc32c 00:07:01.220 ************************************ 00:07:01.220 14:35:37 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:01.220 14:35:37 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:01.220 14:35:37 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:01.220 14:35:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.220 14:35:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.220 14:35:37 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:01.220 14:35:37 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:01.220 14:35:37 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:01.220 14:35:37 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.220 14:35:37 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.220 14:35:37 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:07:01.220 14:35:37 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:07:01.220 14:35:37 accel.accel_copy_crc32c -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:07:01.220 14:35:37 accel.accel_copy_crc32c -- accel/accel.sh@45 -- # [[ -n '' ]] 00:07:01.220 14:35:37 accel.accel_copy_crc32c -- accel/accel.sh@49 -- # local IFS=, 00:07:01.220 14:35:37 accel.accel_copy_crc32c -- accel/accel.sh@50 -- # jq -r . 
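Every accel_perf command in this log reads its configuration from -c /dev/fd/62; that path comes from the harness handing the output of build_accel_config to accel_perf through bash process substitution, with the config text passed through jq -r . as the trace shows. A sketch of the mechanism for the copy_crc32c stage being set up here, using a placeholder config since the JSON that build_accel_config actually emits is not shown in the log:

ACCEL_PERF=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf
# Placeholder; the real harness assembles this JSON from the accel_json_cfg array.
accel_json_cfg='{}'
# <(...) expands to a /dev/fd/NN path, which is what appears as /dev/fd/62 above.
"$ACCEL_PERF" -c <(jq -r . <<< "$accel_json_cfg") -t 1 -w copy_crc32c -y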
00:07:01.220 [2024-07-12 14:35:37.824421] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:07:01.220 [2024-07-12 14:35:37.824494] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418871 ] 00:07:01.220 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.220 [2024-07-12 14:35:37.917681] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.480 [2024-07-12 14:35:38.014845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 
00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.480 14:35:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.858 14:35:39 accel.accel_copy_crc32c 
-- accel/accel.sh@19 -- # IFS=: 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.858 00:07:02.858 real 0m1.414s 00:07:02.858 user 0m1.264s 00:07:02.858 sys 0m0.165s 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.858 14:35:39 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:02.858 ************************************ 00:07:02.858 END TEST accel_copy_crc32c 00:07:02.858 ************************************ 00:07:02.858 14:35:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.858 14:35:39 accel -- accel/accel.sh@120 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:02.858 14:35:39 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:02.858 14:35:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.858 14:35:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.858 ************************************ 00:07:02.858 START TEST accel_copy_crc32c_C2 00:07:02.858 ************************************ 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
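The copy_crc32c_C2 stage whose preamble appears here repeats the previous workload with -C 2 added, which per the option help configures the I/O vector size used for each operation to 2. A minimal standalone sketch of the equivalent invocation, again without the harness-supplied config fd:

ACCEL_PERF=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf
# Same copy_crc32c workload, but with the io vector size raised to 2 (-C 2).
"$ACCEL_PERF" -t 1 -w copy_crc32c -y -C 2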
00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@45 -- # [[ -n '' ]] 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@49 -- # local IFS=, 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@50 -- # jq -r . 00:07:02.858 [2024-07-12 14:35:39.321019] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:07:02.858 [2024-07-12 14:35:39.321106] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419069 ] 00:07:02.858 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.858 [2024-07-12 14:35:39.412837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.858 [2024-07-12 14:35:39.495212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.858 14:35:39 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.858 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.859 14:35:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.236 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.236 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.236 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.236 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.236 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.236 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.236 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.236 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.236 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.236 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.236 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.236 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.236 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.236 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.236 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.237 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.237 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.237 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.237 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.237 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.237 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.237 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.237 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.237 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.237 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.237 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:04.237 14:35:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.237 00:07:04.237 real 0m1.385s 00:07:04.237 user 0m1.239s 
00:07:04.237 sys 0m0.159s 00:07:04.237 14:35:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.237 14:35:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:04.237 ************************************ 00:07:04.237 END TEST accel_copy_crc32c_C2 00:07:04.237 ************************************ 00:07:04.237 14:35:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.237 14:35:40 accel -- accel/accel.sh@121 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:04.237 14:35:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:04.237 14:35:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.237 14:35:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.237 ************************************ 00:07:04.237 START TEST accel_dualcast 00:07:04.237 ************************************ 00:07:04.237 14:35:40 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@45 -- # [[ -n '' ]] 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@49 -- # local IFS=, 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@50 -- # jq -r . 00:07:04.237 [2024-07-12 14:35:40.788450] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:07:04.237 [2024-07-12 14:35:40.788542] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419269 ] 00:07:04.237 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.237 [2024-07-12 14:35:40.864374] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.237 [2024-07-12 14:35:40.947586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.237 14:35:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.237 14:35:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.615 14:35:42 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:05.615 14:35:42 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.615 00:07:05.615 real 0m1.381s 00:07:05.615 user 0m1.247s 00:07:05.615 sys 0m0.146s 00:07:05.615 14:35:42 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.615 14:35:42 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:05.615 ************************************ 00:07:05.615 END TEST accel_dualcast 00:07:05.615 ************************************ 00:07:05.615 14:35:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.615 14:35:42 accel -- accel/accel.sh@122 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:05.615 14:35:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:05.615 14:35:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.615 14:35:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.615 ************************************ 00:07:05.615 START TEST accel_compare 00:07:05.615 ************************************ 00:07:05.615 14:35:42 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:05.615 14:35:42 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:05.615 14:35:42 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:05.615 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.615 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.615 14:35:42 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:05.615 14:35:42 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:05.615 14:35:42 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:05.615 14:35:42 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.615 14:35:42 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.615 14:35:42 accel.accel_compare -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:07:05.615 14:35:42 accel.accel_compare -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:07:05.615 14:35:42 accel.accel_compare -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:07:05.615 14:35:42 accel.accel_compare -- accel/accel.sh@45 -- # [[ -n '' ]] 00:07:05.615 14:35:42 accel.accel_compare -- accel/accel.sh@49 -- # local IFS=, 00:07:05.615 14:35:42 accel.accel_compare -- accel/accel.sh@50 -- # jq -r . 00:07:05.615 [2024-07-12 14:35:42.251274] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:07:05.615 [2024-07-12 14:35:42.251358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419460 ] 00:07:05.615 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.615 [2024-07-12 14:35:42.338974] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.875 [2024-07-12 14:35:42.423652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.875 14:35:42 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.875 14:35:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.876 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.876 14:35:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.252 14:35:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.252 14:35:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 14:35:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.252 14:35:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.252 14:35:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.252 14:35:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 14:35:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.252 14:35:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.252 14:35:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.252 14:35:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 14:35:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.252 14:35:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.252 14:35:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.252 14:35:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 14:35:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.253 14:35:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.253 
14:35:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.253 14:35:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.253 14:35:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.253 14:35:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.253 14:35:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.253 14:35:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.253 14:35:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.253 14:35:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.253 14:35:43 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.253 14:35:43 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:07.253 14:35:43 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.253 00:07:07.253 real 0m1.393s 00:07:07.253 user 0m1.248s 00:07:07.253 sys 0m0.158s 00:07:07.253 14:35:43 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.253 14:35:43 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:07.253 ************************************ 00:07:07.253 END TEST accel_compare 00:07:07.253 ************************************ 00:07:07.253 14:35:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.253 14:35:43 accel -- accel/accel.sh@123 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:07.253 14:35:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:07.253 14:35:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.253 14:35:43 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.253 ************************************ 00:07:07.253 START TEST accel_xor 00:07:07.253 ************************************ 00:07:07.253 14:35:43 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@45 -- # [[ -n '' ]] 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@49 -- # local IFS=, 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@50 -- # jq -r . 00:07:07.253 [2024-07-12 14:35:43.725089] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:07:07.253 [2024-07-12 14:35:43.725173] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419662 ] 00:07:07.253 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.253 [2024-07-12 14:35:43.811864] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.253 [2024-07-12 14:35:43.894274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.253 14:35:43 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.253 14:35:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.633 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.633 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.633 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.633 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.633 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.633 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.633 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.633 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.633 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.633 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.633 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.633 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.633 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.633 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.633 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.633 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.634 00:07:08.634 real 0m1.389s 00:07:08.634 user 0m1.244s 00:07:08.634 sys 0m0.158s 00:07:08.634 14:35:45 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.634 14:35:45 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:08.634 ************************************ 00:07:08.634 END TEST accel_xor 00:07:08.634 ************************************ 00:07:08.634 14:35:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.634 14:35:45 accel -- accel/accel.sh@124 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:08.634 14:35:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:08.634 14:35:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.634 14:35:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.634 ************************************ 00:07:08.634 START TEST accel_xor 00:07:08.634 ************************************ 00:07:08.634 14:35:45 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@45 -- # [[ -n '' ]] 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@49 -- # local IFS=, 00:07:08.634 14:35:45 accel.accel_xor -- accel/accel.sh@50 -- # jq -r . 00:07:08.634 [2024-07-12 14:35:45.201653] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:07:08.634 [2024-07-12 14:35:45.201736] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419853 ] 00:07:08.634 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.634 [2024-07-12 14:35:45.292615] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.634 [2024-07-12 14:35:45.379264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.893 14:35:45 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.893 14:35:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:09.829 14:35:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.829 00:07:09.829 real 0m1.402s 00:07:09.829 user 0m1.263s 00:07:09.829 sys 0m0.152s 00:07:09.829 14:35:46 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.829 14:35:46 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:09.829 ************************************ 00:07:09.829 END TEST accel_xor 00:07:09.829 ************************************ 00:07:10.088 14:35:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.088 14:35:46 accel -- accel/accel.sh@125 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:10.088 14:35:46 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:10.088 14:35:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.088 14:35:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.088 ************************************ 00:07:10.088 START TEST accel_dif_verify 00:07:10.088 ************************************ 00:07:10.088 14:35:46 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:10.088 14:35:46 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:10.088 14:35:46 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:10.088 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.088 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.088 14:35:46 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:10.088 14:35:46 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:10.088 14:35:46 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:10.088 14:35:46 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.088 14:35:46 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.088 14:35:46 accel.accel_dif_verify -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:07:10.088 14:35:46 accel.accel_dif_verify -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:07:10.088 14:35:46 accel.accel_dif_verify -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:07:10.088 14:35:46 accel.accel_dif_verify -- accel/accel.sh@45 -- # [[ -n '' ]] 00:07:10.088 14:35:46 accel.accel_dif_verify -- accel/accel.sh@49 -- # local IFS=, 00:07:10.088 14:35:46 accel.accel_dif_verify -- accel/accel.sh@50 -- # jq -r . 00:07:10.088 [2024-07-12 14:35:46.683561] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:07:10.088 [2024-07-12 14:35:46.683644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1420057 ] 00:07:10.088 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.088 [2024-07-12 14:35:46.774892] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.088 [2024-07-12 14:35:46.854992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.347 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.348 14:35:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.348 14:35:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.348 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.348 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.348 14:35:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:10.348 14:35:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.348 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.348 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.348 14:35:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:10.348 14:35:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.348 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.348 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:10.348 14:35:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:10.348 14:35:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:10.348 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:10.348 14:35:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:11.284 14:35:48 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.284 00:07:11.284 real 0m1.389s 00:07:11.284 user 0m1.253s 00:07:11.284 sys 0m0.151s 00:07:11.284 14:35:48 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.284 14:35:48 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:11.284 ************************************ 00:07:11.284 END TEST accel_dif_verify 00:07:11.284 ************************************ 00:07:11.544 14:35:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.544 14:35:48 accel -- accel/accel.sh@126 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:11.544 14:35:48 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:11.544 14:35:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.544 14:35:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.544 ************************************ 00:07:11.544 START TEST accel_dif_generate 00:07:11.544 ************************************ 00:07:11.544 14:35:48 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:11.544 14:35:48 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:11.544 14:35:48 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:11.544 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.544 
14:35:48 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:11.544 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.544 14:35:48 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:11.544 14:35:48 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:11.544 14:35:48 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.544 14:35:48 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.544 14:35:48 accel.accel_dif_generate -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:07:11.544 14:35:48 accel.accel_dif_generate -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:07:11.544 14:35:48 accel.accel_dif_generate -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:07:11.544 14:35:48 accel.accel_dif_generate -- accel/accel.sh@45 -- # [[ -n '' ]] 00:07:11.544 14:35:48 accel.accel_dif_generate -- accel/accel.sh@49 -- # local IFS=, 00:07:11.544 14:35:48 accel.accel_dif_generate -- accel/accel.sh@50 -- # jq -r . 00:07:11.544 [2024-07-12 14:35:48.152662] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:07:11.544 [2024-07-12 14:35:48.152727] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1420252 ] 00:07:11.544 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.544 [2024-07-12 14:35:48.240284] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.544 [2024-07-12 14:35:48.329429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.804 14:35:48 
accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.804 14:35:48 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.804 14:35:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.741 14:35:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.741 14:35:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.741 14:35:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.741 14:35:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.741 14:35:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.741 14:35:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.741 14:35:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.741 14:35:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.741 14:35:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.741 14:35:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.741 14:35:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.741 14:35:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.741 14:35:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.741 14:35:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.741 14:35:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.741 14:35:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.741 14:35:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.741 14:35:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.741 14:35:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.741 14:35:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.000 14:35:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:13.000 14:35:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.000 14:35:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.000 14:35:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.000 14:35:49 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.000 14:35:49 accel.accel_dif_generate -- accel/accel.sh@27 -- 
# [[ -n dif_generate ]] 00:07:13.000 14:35:49 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.000 00:07:13.000 real 0m1.398s 00:07:13.000 user 0m1.245s 00:07:13.000 sys 0m0.165s 00:07:13.000 14:35:49 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.000 14:35:49 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:13.000 ************************************ 00:07:13.000 END TEST accel_dif_generate 00:07:13.000 ************************************ 00:07:13.000 14:35:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.000 14:35:49 accel -- accel/accel.sh@127 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:13.000 14:35:49 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:13.000 14:35:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.000 14:35:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.000 ************************************ 00:07:13.000 START TEST accel_dif_generate_copy 00:07:13.000 ************************************ 00:07:13.000 14:35:49 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:13.000 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:13.000 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:13.000 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.000 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.000 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:13.000 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:13.000 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:13.000 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.000 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.000 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:07:13.000 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:07:13.000 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:07:13.000 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@45 -- # [[ -n '' ]] 00:07:13.000 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@49 -- # local IFS=, 00:07:13.000 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@50 -- # jq -r . 00:07:13.000 [2024-07-12 14:35:49.635217] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:07:13.000 [2024-07-12 14:35:49.635302] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1420460 ] 00:07:13.000 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.000 [2024-07-12 14:35:49.722190] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.259 [2024-07-12 14:35:49.806850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.259 14:35:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.637 14:35:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.637 14:35:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.637 14:35:50 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:14.637 14:35:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.637 14:35:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.637 14:35:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.637 14:35:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.637 00:07:14.637 real 0m1.392s 00:07:14.637 user 0m1.252s 00:07:14.637 sys 0m0.154s 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.637 14:35:51 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:14.637 ************************************ 00:07:14.637 END TEST accel_dif_generate_copy 00:07:14.637 ************************************ 00:07:14.637 14:35:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.637 14:35:51 accel -- accel/accel.sh@129 -- # [[ y == y ]] 00:07:14.637 14:35:51 accel -- accel/accel.sh@130 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:14.637 14:35:51 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:14.637 14:35:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.637 14:35:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.637 ************************************ 00:07:14.637 START TEST accel_comp 00:07:14.637 ************************************ 00:07:14.637 14:35:51 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:14.637 14:35:51 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:14.637 14:35:51 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:14.637 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.637 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@45 -- # [[ -n '' ]] 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@49 -- # local IFS=, 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@50 -- # jq -r . 00:07:14.638 [2024-07-12 14:35:51.115167] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:07:14.638 [2024-07-12 14:35:51.115253] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1420708 ] 00:07:14.638 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.638 [2024-07-12 14:35:51.192974] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.638 [2024-07-12 14:35:51.277582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.638 14:35:51 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.638 14:35:51 accel.accel_comp -- 
accel/accel.sh@19 -- # IFS=: 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.638 14:35:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:16.015 14:35:52 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.015 00:07:16.015 real 0m1.387s 00:07:16.015 user 0m1.261s 00:07:16.015 sys 0m0.141s 00:07:16.015 14:35:52 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.015 14:35:52 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:16.015 ************************************ 00:07:16.015 END TEST accel_comp 00:07:16.015 ************************************ 00:07:16.015 14:35:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.015 14:35:52 accel -- accel/accel.sh@131 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:16.015 14:35:52 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:16.015 14:35:52 
accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.015 14:35:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.015 ************************************ 00:07:16.015 START TEST accel_decomp 00:07:16.015 ************************************ 00:07:16.015 14:35:52 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:16.015 14:35:52 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:16.015 14:35:52 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:16.015 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.015 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.015 14:35:52 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:16.015 14:35:52 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:16.015 14:35:52 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:16.015 14:35:52 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.015 14:35:52 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.015 14:35:52 accel.accel_decomp -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:07:16.015 14:35:52 accel.accel_decomp -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:07:16.015 14:35:52 accel.accel_decomp -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:07:16.015 14:35:52 accel.accel_decomp -- accel/accel.sh@45 -- # [[ -n '' ]] 00:07:16.015 14:35:52 accel.accel_decomp -- accel/accel.sh@49 -- # local IFS=, 00:07:16.015 14:35:52 accel.accel_decomp -- accel/accel.sh@50 -- # jq -r . 00:07:16.015 [2024-07-12 14:35:52.587984] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:07:16.015 [2024-07-12 14:35:52.588069] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1420949 ] 00:07:16.015 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.015 [2024-07-12 14:35:52.677095] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.015 [2024-07-12 14:35:52.760395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.273 14:35:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.273 14:35:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.273 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.273 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.273 14:35:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.273 14:35:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.273 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.273 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.273 14:35:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.273 14:35:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.273 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.273 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.273 14:35:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:16.273 14:35:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.273 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.274 14:35:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.221 14:35:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.221 14:35:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.221 14:35:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.221 14:35:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.221 14:35:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.221 14:35:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.221 14:35:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.221 14:35:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.221 14:35:53 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.221 14:35:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.221 14:35:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.221 14:35:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.221 14:35:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.221 14:35:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.221 14:35:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.221 14:35:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.221 14:35:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.221 14:35:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.222 14:35:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.222 14:35:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.222 14:35:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.222 14:35:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.222 14:35:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.222 14:35:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.222 14:35:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.222 14:35:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:17.222 14:35:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.222 00:07:17.222 real 0m1.397s 00:07:17.222 user 0m1.255s 00:07:17.222 sys 0m0.157s 00:07:17.222 14:35:53 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.222 14:35:53 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:17.222 ************************************ 00:07:17.222 END TEST accel_decomp 00:07:17.222 ************************************ 00:07:17.481 14:35:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.481 14:35:54 accel -- accel/accel.sh@132 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:17.481 14:35:54 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:17.481 14:35:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.481 14:35:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.481 ************************************ 00:07:17.481 START TEST accel_decomp_full 00:07:17.481 ************************************ 00:07:17.481 14:35:54 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:17.481 14:35:54 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:17.481 14:35:54 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:17.481 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:17.481 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:17.481 14:35:54 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:17.481 14:35:54 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:17.481 14:35:54 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:17.481 14:35:54 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.482 14:35:54 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.482 14:35:54 accel.accel_decomp_full -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:07:17.482 14:35:54 accel.accel_decomp_full -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:07:17.482 14:35:54 accel.accel_decomp_full -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:07:17.482 14:35:54 accel.accel_decomp_full -- accel/accel.sh@45 -- # [[ -n '' ]] 00:07:17.482 14:35:54 accel.accel_decomp_full -- accel/accel.sh@49 -- # local IFS=, 00:07:17.482 14:35:54 accel.accel_decomp_full -- accel/accel.sh@50 -- # jq -r . 00:07:17.482 [2024-07-12 14:35:54.070335] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:07:17.482 [2024-07-12 14:35:54.070419] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1421192 ] 00:07:17.482 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.482 [2024-07-12 14:35:54.160246] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.482 [2024-07-12 14:35:54.249748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:17.741 14:35:54 
accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:17.741 14:35:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:17.742 14:35:54 accel.accel_decomp_full -- 
accel/accel.sh@20 -- # val= 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:17.742 14:35:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:18.679 14:35:55 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.679 00:07:18.679 real 0m1.414s 00:07:18.679 user 0m1.264s 00:07:18.679 sys 0m0.164s 00:07:18.679 14:35:55 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.679 14:35:55 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:18.679 ************************************ 00:07:18.679 END TEST accel_decomp_full 00:07:18.679 ************************************ 00:07:18.971 14:35:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:18.971 14:35:55 accel -- accel/accel.sh@133 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:18.971 14:35:55 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:18.971 14:35:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.971 14:35:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.971 ************************************ 00:07:18.971 START TEST accel_decomp_mcore 00:07:18.971 ************************************ 00:07:18.971 14:35:55 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:18.971 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:18.971 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:18.971 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.971 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.971 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:18.971 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:18.971 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:18.971 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.971 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.971 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:07:18.971 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:07:18.971 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:07:18.971 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@45 -- # [[ -n '' ]] 00:07:18.971 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@49 -- # local IFS=, 00:07:18.971 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@50 -- # jq -r . 00:07:18.971 [2024-07-12 14:35:55.564956] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
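A note on the invocation captured just above: accel_decomp_mcore runs the same software-path decompress workload as the single-core test before it, but with -m 0xf so accel_perf brings up one reactor on each of cores 0-3 (the "Reactor started on core N" notices that follow). A minimal sketch of that command follows, with paths relative to the spdk checkout used in this run; the option meanings are inferred from how the wrapper uses them, and -c /dev/fd/62 is the descriptor on which build_accel_config hands accel_perf its JSON accel configuration (empty in this run, since no hardware module is selected):

  ./build/examples/accel_perf \
      -c /dev/fd/62 \       # JSON accel config supplied by the wrapper
      -t 1 \                # run the workload for 1 second
      -w decompress \       # operation under test
      -l test/accel/bib \   # input file for the decompress workload
      -y \                  # verify the output
      -m 0xf                # core mask: cores 0-3, one reactor each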
00:07:18.971 [2024-07-12 14:35:55.565038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1421392 ] 00:07:18.971 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.971 [2024-07-12 14:35:55.651604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.971 [2024-07-12 14:35:55.735236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.971 [2024-07-12 14:35:55.735335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.971 [2024-07-12 14:35:55.735376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.971 [2024-07-12 14:35:55.735377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.231 14:35:55 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.231 14:35:55 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # IFS=: 00:07:19.231 14:35:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.167 00:07:20.167 real 0m1.388s 00:07:20.167 user 0m4.583s 00:07:20.167 sys 0m0.164s 00:07:20.167 14:35:56 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.167 14:35:56 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:20.167 ************************************ 00:07:20.167 END TEST accel_decomp_mcore 00:07:20.167 ************************************ 00:07:20.426 14:35:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:20.426 14:35:56 accel -- accel/accel.sh@134 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:20.426 14:35:56 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:20.426 14:35:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.426 14:35:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.426 ************************************ 00:07:20.426 START TEST accel_decomp_full_mcore 00:07:20.426 ************************************ 00:07:20.426 14:35:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:20.426 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:20.426 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:20.426 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.426 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.426 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:20.426 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:20.426 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:20.426 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.426 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.426 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:07:20.426 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:07:20.426 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:07:20.426 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@45 -- # [[ -n '' ]] 00:07:20.426 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@49 -- # local IFS=, 00:07:20.426 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@50 -- # jq -r . 00:07:20.426 [2024-07-12 14:35:57.037422] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
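The accel_decomp_full_mcore case that starts here differs from the previous one only in the -o 0 flag, which is how the wrapper switches from the default 4096-byte chunks to the full 111250-byte buffer (the val='111250 bytes' assignment visible in the trace), still spread across cores 0-3. Sketch of the distinguishing invocation, under the same assumptions as above:

  ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
      -l test/accel/bib -y \
      -o 0 \                # full-buffer variant: run against the whole 111250-byte payload
      -m 0xf                # still four cores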
00:07:20.426 [2024-07-12 14:35:57.037511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1421595 ] 00:07:20.426 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.426 [2024-07-12 14:35:57.124426] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:20.426 [2024-07-12 14:35:57.210664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.426 [2024-07-12 14:35:57.210767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.426 [2024-07-12 14:35:57.210867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.426 [2024-07-12 14:35:57.210868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.686 14:35:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:21.666 14:35:58 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.666 00:07:21.666 real 0m1.419s 00:07:21.666 user 0m4.675s 00:07:21.667 sys 0m0.165s 00:07:21.667 14:35:58 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.667 14:35:58 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:21.667 ************************************ 00:07:21.667 END TEST accel_decomp_full_mcore 00:07:21.667 ************************************ 00:07:21.926 14:35:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.926 14:35:58 accel -- accel/accel.sh@135 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:21.926 14:35:58 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:21.926 14:35:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.926 14:35:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.926 ************************************ 00:07:21.926 START TEST accel_decomp_mthread 00:07:21.926 ************************************ 00:07:21.926 14:35:58 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:21.926 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:21.926 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:21.926 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.926 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.926 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:21.926 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:21.926 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:21.926 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.926 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.926 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:07:21.926 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:07:21.926 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:07:21.926 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@45 -- # [[ -n '' ]] 00:07:21.926 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@49 -- # local IFS=, 00:07:21.926 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@50 -- # jq -r . 00:07:21.926 [2024-07-12 14:35:58.541207] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
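accel_decomp_mthread, starting here, moves back to a single core (EAL core mask 0x1, a single "Reactor started on core 0") and instead scales by threads: -T 2 asks accel_perf for two worker threads on that core, against the default 4096-byte chunk size. Sketch, same assumptions as the earlier ones:

  ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
      -l test/accel/bib -y \
      -T 2                  # two threads on the single core instead of more cores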
00:07:21.926 [2024-07-12 14:35:58.541296] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1421792 ] 00:07:21.926 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.926 [2024-07-12 14:35:58.635768] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.185 [2024-07-12 14:35:58.726710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.185 14:35:58 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.185 14:35:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.563 14:35:59 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.563 00:07:23.563 real 0m1.416s 00:07:23.563 user 0m1.258s 00:07:23.563 sys 0m0.172s 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.563 14:35:59 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:23.563 ************************************ 00:07:23.563 END TEST accel_decomp_mthread 00:07:23.563 ************************************ 00:07:23.563 14:35:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.563 14:35:59 accel -- accel/accel.sh@136 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:23.563 14:35:59 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:23.563 14:35:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:07:23.563 14:35:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.563 ************************************ 00:07:23.563 START TEST accel_decomp_full_mthread 00:07:23.563 ************************************ 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@45 -- # [[ -n '' ]] 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@49 -- # local IFS=, 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@50 -- # jq -r . 00:07:23.563 [2024-07-12 14:36:00.040770] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
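accel_decomp_full_mthread combines the two previous knobs: the full 111250-byte buffer (-o 0) and two threads on the single core (-T 2). Sketch:

  ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
      -l test/accel/bib -y \
      -o 0 \                # whole-buffer decompress
      -T 2                  # two worker threads, one core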
00:07:23.563 [2024-07-12 14:36:00.040837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1421999 ] 00:07:23.563 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.563 [2024-07-12 14:36:00.130315] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.563 [2024-07-12 14:36:00.219179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.563 14:36:00 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.563 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.564 14:36:00 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.564 14:36:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.940 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.940 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.940 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.940 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.940 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.940 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.940 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.940 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.940 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.940 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.940 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.940 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.940 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.940 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.940 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.940 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.940 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.941 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.941 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.941 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.941 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.941 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.941 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.941 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.941 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.941 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.941 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.941 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.941 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.941 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:24.941 14:36:01 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.941 00:07:24.941 real 0m1.428s 00:07:24.941 user 0m1.265s 00:07:24.941 sys 0m0.176s 00:07:24.941 14:36:01 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.941 14:36:01 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:24.941 ************************************ 00:07:24.941 END 
TEST accel_decomp_full_mthread 00:07:24.941 ************************************ 00:07:24.941 14:36:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.941 14:36:01 accel -- accel/accel.sh@138 -- # [[ n == y ]] 00:07:24.941 14:36:01 accel -- accel/accel.sh@150 -- # [[ 0 == 1 ]] 00:07:24.941 14:36:01 accel -- accel/accel.sh@177 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:24.941 14:36:01 accel -- accel/accel.sh@177 -- # build_accel_config 00:07:24.941 14:36:01 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:24.941 14:36:01 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.941 14:36:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.941 14:36:01 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.941 14:36:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.941 14:36:01 accel -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:07:24.941 14:36:01 accel -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:07:24.941 14:36:01 accel -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:07:24.941 14:36:01 accel -- accel/accel.sh@45 -- # [[ -n '' ]] 00:07:24.941 14:36:01 accel -- accel/accel.sh@49 -- # local IFS=, 00:07:24.941 14:36:01 accel -- accel/accel.sh@50 -- # jq -r . 00:07:24.941 ************************************ 00:07:24.941 START TEST accel_dif_functional_tests 00:07:24.941 ************************************ 00:07:24.941 14:36:01 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:24.941 [2024-07-12 14:36:01.556325] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:07:24.941 [2024-07-12 14:36:01.556406] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422290 ] 00:07:24.941 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.941 [2024-07-12 14:36:01.643840] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.200 [2024-07-12 14:36:01.727985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.200 [2024-07-12 14:36:01.728086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.200 [2024-07-12 14:36:01.728086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.200 00:07:25.200 00:07:25.200 CUnit - A unit testing framework for C - Version 2.1-3 00:07:25.200 http://cunit.sourceforge.net/ 00:07:25.200 00:07:25.200 00:07:25.200 Suite: accel_dif 00:07:25.200 Test: verify: DIF generated, GUARD check ...passed 00:07:25.200 Test: verify: DIF generated, APPTAG check ...passed 00:07:25.200 Test: verify: DIF generated, REFTAG check ...passed 00:07:25.200 Test: verify: DIF not generated, GUARD check ...[2024-07-12 14:36:01.801215] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:25.200 passed 00:07:25.200 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 14:36:01.801270] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:25.200 passed 00:07:25.200 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 14:36:01.801312] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 
00:07:25.200 passed 00:07:25.200 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:25.200 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 14:36:01.801362] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:25.200 passed 00:07:25.200 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:25.200 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:25.200 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:25.200 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 14:36:01.801462] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:25.200 passed 00:07:25.200 Test: verify copy: DIF generated, GUARD check ...passed 00:07:25.200 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:25.200 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:25.200 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 14:36:01.801596] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:25.200 passed 00:07:25.200 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 14:36:01.801624] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:25.200 passed 00:07:25.200 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 14:36:01.801652] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:25.200 passed 00:07:25.200 Test: generate copy: DIF generated, GUARD check ...passed 00:07:25.200 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:25.200 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:25.200 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:25.200 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:25.200 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:25.200 Test: generate copy: iovecs-len validate ...[2024-07-12 14:36:01.801821] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:25.200 passed 00:07:25.200 Test: generate copy: buffer alignment validate ...passed 00:07:25.200 00:07:25.200 Run Summary: Type Total Ran Passed Failed Inactive 00:07:25.200 suites 1 1 n/a 0 0 00:07:25.200 tests 26 26 26 0 0 00:07:25.200 asserts 115 115 115 0 n/a 00:07:25.200 00:07:25.200 Elapsed time = 0.002 seconds 00:07:25.200 00:07:25.200 real 0m0.435s 00:07:25.200 user 0m0.584s 00:07:25.200 sys 0m0.173s 00:07:25.200 14:36:01 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.200 14:36:01 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:25.200 ************************************ 00:07:25.200 END TEST accel_dif_functional_tests 00:07:25.200 ************************************ 00:07:25.460 14:36:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.460 14:36:02 accel -- accel/accel.sh@178 -- # export PCI_ALLOWED= 00:07:25.460 14:36:02 accel -- accel/accel.sh@178 -- # PCI_ALLOWED= 00:07:25.460 00:07:25.460 real 0m32.875s 00:07:25.460 user 0m35.331s 00:07:25.460 sys 0m5.750s 00:07:25.460 14:36:02 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.460 14:36:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.460 ************************************ 00:07:25.460 END TEST accel 00:07:25.460 ************************************ 00:07:25.460 14:36:02 -- common/autotest_common.sh@1142 -- # return 0 00:07:25.460 14:36:02 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:25.460 14:36:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:25.460 14:36:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.460 14:36:02 -- common/autotest_common.sh@10 -- # set +x 00:07:25.460 ************************************ 00:07:25.460 START TEST accel_rpc 00:07:25.460 ************************************ 00:07:25.460 14:36:02 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:25.460 * Looking for test storage... 00:07:25.460 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:07:25.460 14:36:02 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:25.460 14:36:02 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1422543 00:07:25.460 14:36:02 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1422543 00:07:25.460 14:36:02 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:25.460 14:36:02 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1422543 ']' 00:07:25.460 14:36:02 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.460 14:36:02 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.460 14:36:02 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.460 14:36:02 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.460 14:36:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.460 [2024-07-12 14:36:02.232854] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:07:25.460 [2024-07-12 14:36:02.232924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422543 ] 00:07:25.720 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.720 [2024-07-12 14:36:02.321626] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.720 [2024-07-12 14:36:02.405565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.657 14:36:03 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.657 14:36:03 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:26.657 14:36:03 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:26.657 14:36:03 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:26.657 14:36:03 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:26.657 14:36:03 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:26.657 14:36:03 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:26.657 14:36:03 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:26.657 14:36:03 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.657 14:36:03 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.657 ************************************ 00:07:26.657 START TEST accel_assign_opcode 00:07:26.657 ************************************ 00:07:26.657 14:36:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:26.657 14:36:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:26.657 14:36:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.657 14:36:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:26.657 [2024-07-12 14:36:03.127701] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:26.657 14:36:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.657 14:36:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:26.657 14:36:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.657 14:36:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:26.657 [2024-07-12 14:36:03.135707] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:26.657 14:36:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.657 14:36:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:26.657 14:36:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.657 14:36:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:26.657 14:36:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.657 14:36:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:26.657 14:36:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:26.657 14:36:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:07:26.657 14:36:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:26.657 14:36:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:26.657 14:36:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.657 software 00:07:26.657 00:07:26.657 real 0m0.245s 00:07:26.657 user 0m0.048s 00:07:26.657 sys 0m0.013s 00:07:26.657 14:36:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.657 14:36:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:26.657 ************************************ 00:07:26.657 END TEST accel_assign_opcode 00:07:26.657 ************************************ 00:07:26.657 14:36:03 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:26.657 14:36:03 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1422543 00:07:26.657 14:36:03 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1422543 ']' 00:07:26.657 14:36:03 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1422543 00:07:26.657 14:36:03 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:26.657 14:36:03 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:26.657 14:36:03 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1422543 00:07:26.916 14:36:03 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:26.916 14:36:03 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:26.916 14:36:03 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1422543' 00:07:26.916 killing process with pid 1422543 00:07:26.916 14:36:03 accel_rpc -- common/autotest_common.sh@967 -- # kill 1422543 00:07:26.916 14:36:03 accel_rpc -- common/autotest_common.sh@972 -- # wait 1422543 00:07:27.175 00:07:27.175 real 0m1.703s 00:07:27.175 user 0m1.743s 00:07:27.175 sys 0m0.513s 00:07:27.175 14:36:03 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.175 14:36:03 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.175 ************************************ 00:07:27.175 END TEST accel_rpc 00:07:27.175 ************************************ 00:07:27.175 14:36:03 -- common/autotest_common.sh@1142 -- # return 0 00:07:27.175 14:36:03 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:27.175 14:36:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:27.175 14:36:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.175 14:36:03 -- common/autotest_common.sh@10 -- # set +x 00:07:27.175 ************************************ 00:07:27.175 START TEST app_cmdline 00:07:27.175 ************************************ 00:07:27.175 14:36:03 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:27.434 * Looking for test storage... 
00:07:27.434 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:27.434 14:36:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:27.434 14:36:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1422797 00:07:27.434 14:36:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1422797 00:07:27.434 14:36:03 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:27.434 14:36:03 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1422797 ']' 00:07:27.434 14:36:03 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.434 14:36:03 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.434 14:36:03 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.434 14:36:03 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.434 14:36:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:27.434 [2024-07-12 14:36:04.019405] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:07:27.434 [2024-07-12 14:36:04.019499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422797 ] 00:07:27.434 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.434 [2024-07-12 14:36:04.108790] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.434 [2024-07-12 14:36:04.198011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.364 14:36:04 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.364 14:36:04 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:28.364 14:36:04 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:28.364 { 00:07:28.364 "version": "SPDK v24.09-pre git sha1 2a2ade677", 00:07:28.364 "fields": { 00:07:28.364 "major": 24, 00:07:28.364 "minor": 9, 00:07:28.364 "patch": 0, 00:07:28.364 "suffix": "-pre", 00:07:28.364 "commit": "2a2ade677" 00:07:28.364 } 00:07:28.364 } 00:07:28.364 14:36:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:28.364 14:36:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:28.364 14:36:05 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:28.364 14:36:05 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:28.364 14:36:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:28.364 14:36:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:28.364 14:36:05 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.364 14:36:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.364 14:36:05 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:28.364 14:36:05 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.364 14:36:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:28.364 14:36:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:28.364 14:36:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.364 14:36:05 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:28.364 14:36:05 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.364 14:36:05 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:28.364 14:36:05 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.364 14:36:05 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:28.364 14:36:05 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.364 14:36:05 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:28.364 14:36:05 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.364 14:36:05 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:28.365 14:36:05 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:07:28.365 14:36:05 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.623 request: 00:07:28.623 { 00:07:28.623 "method": "env_dpdk_get_mem_stats", 00:07:28.623 "req_id": 1 00:07:28.623 } 00:07:28.623 Got JSON-RPC error response 00:07:28.623 response: 00:07:28.623 { 00:07:28.623 "code": -32601, 00:07:28.623 "message": "Method not found" 00:07:28.623 } 00:07:28.623 14:36:05 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:28.623 14:36:05 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.623 14:36:05 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:28.623 14:36:05 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.623 14:36:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1422797 00:07:28.623 14:36:05 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1422797 ']' 00:07:28.623 14:36:05 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1422797 00:07:28.623 14:36:05 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:28.623 14:36:05 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:28.623 14:36:05 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1422797 00:07:28.623 14:36:05 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:28.623 14:36:05 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:28.623 14:36:05 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1422797' 00:07:28.623 killing process with pid 1422797 00:07:28.623 14:36:05 app_cmdline -- common/autotest_common.sh@967 -- # kill 1422797 00:07:28.623 14:36:05 app_cmdline -- common/autotest_common.sh@972 -- # wait 1422797 00:07:28.881 00:07:28.881 real 0m1.749s 00:07:28.881 user 0m1.987s 00:07:28.881 sys 0m0.532s 00:07:28.882 14:36:05 app_cmdline -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:07:28.882 14:36:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.882 ************************************ 00:07:28.882 END TEST app_cmdline 00:07:28.882 ************************************ 00:07:29.140 14:36:05 -- common/autotest_common.sh@1142 -- # return 0 00:07:29.140 14:36:05 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:29.140 14:36:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:29.140 14:36:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.140 14:36:05 -- common/autotest_common.sh@10 -- # set +x 00:07:29.140 ************************************ 00:07:29.140 START TEST version 00:07:29.141 ************************************ 00:07:29.141 14:36:05 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:29.141 * Looking for test storage... 00:07:29.141 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:29.141 14:36:05 version -- app/version.sh@17 -- # get_header_version major 00:07:29.141 14:36:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:29.141 14:36:05 version -- app/version.sh@14 -- # cut -f2 00:07:29.141 14:36:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.141 14:36:05 version -- app/version.sh@17 -- # major=24 00:07:29.141 14:36:05 version -- app/version.sh@18 -- # get_header_version minor 00:07:29.141 14:36:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:29.141 14:36:05 version -- app/version.sh@14 -- # cut -f2 00:07:29.141 14:36:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.141 14:36:05 version -- app/version.sh@18 -- # minor=9 00:07:29.141 14:36:05 version -- app/version.sh@19 -- # get_header_version patch 00:07:29.141 14:36:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:29.141 14:36:05 version -- app/version.sh@14 -- # cut -f2 00:07:29.141 14:36:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.141 14:36:05 version -- app/version.sh@19 -- # patch=0 00:07:29.141 14:36:05 version -- app/version.sh@20 -- # get_header_version suffix 00:07:29.141 14:36:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:29.141 14:36:05 version -- app/version.sh@14 -- # cut -f2 00:07:29.141 14:36:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.141 14:36:05 version -- app/version.sh@20 -- # suffix=-pre 00:07:29.141 14:36:05 version -- app/version.sh@22 -- # version=24.9 00:07:29.141 14:36:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:29.141 14:36:05 version -- app/version.sh@28 -- # version=24.9rc0 00:07:29.141 14:36:05 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:29.141 14:36:05 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:07:29.141 14:36:05 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:29.141 14:36:05 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:29.141 00:07:29.141 real 0m0.189s 00:07:29.141 user 0m0.091s 00:07:29.141 sys 0m0.146s 00:07:29.141 14:36:05 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.141 14:36:05 version -- common/autotest_common.sh@10 -- # set +x 00:07:29.141 ************************************ 00:07:29.141 END TEST version 00:07:29.141 ************************************ 00:07:29.400 14:36:05 -- common/autotest_common.sh@1142 -- # return 0 00:07:29.400 14:36:05 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:29.400 14:36:05 -- spdk/autotest.sh@198 -- # uname -s 00:07:29.400 14:36:05 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:29.400 14:36:05 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:29.400 14:36:05 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:29.400 14:36:05 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:29.400 14:36:05 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:29.400 14:36:05 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:29.400 14:36:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:29.400 14:36:05 -- common/autotest_common.sh@10 -- # set +x 00:07:29.401 14:36:05 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:29.401 14:36:05 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:29.401 14:36:05 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:07:29.401 14:36:05 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:07:29.401 14:36:05 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:07:29.401 14:36:05 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:07:29.401 14:36:05 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:07:29.401 14:36:05 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:07:29.401 14:36:05 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:07:29.401 14:36:06 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:07:29.401 14:36:06 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:07:29.401 14:36:06 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:07:29.401 14:36:06 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:07:29.401 14:36:06 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:07:29.401 14:36:06 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:07:29.401 14:36:06 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:07:29.401 14:36:06 -- spdk/autotest.sh@371 -- # [[ 1 -eq 1 ]] 00:07:29.401 14:36:06 -- spdk/autotest.sh@372 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:29.401 14:36:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:29.401 14:36:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.401 14:36:06 -- common/autotest_common.sh@10 -- # set +x 00:07:29.401 ************************************ 00:07:29.401 START TEST llvm_fuzz 00:07:29.401 ************************************ 00:07:29.401 14:36:06 llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:29.401 * Looking for test storage... 
00:07:29.401 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:07:29.401 14:36:06 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:07:29.401 14:36:06 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:07:29.401 14:36:06 llvm_fuzz -- common/autotest_common.sh@546 -- # fuzzers=() 00:07:29.401 14:36:06 llvm_fuzz -- common/autotest_common.sh@546 -- # local fuzzers 00:07:29.401 14:36:06 llvm_fuzz -- common/autotest_common.sh@548 -- # [[ -n '' ]] 00:07:29.401 14:36:06 llvm_fuzz -- common/autotest_common.sh@551 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:07:29.401 14:36:06 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("${fuzzers[@]##*/}") 00:07:29.401 14:36:06 llvm_fuzz -- common/autotest_common.sh@555 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:07:29.401 14:36:06 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:07:29.401 14:36:06 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:07:29.401 14:36:06 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:07:29.401 14:36:06 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:29.401 14:36:06 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:29.401 14:36:06 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:29.401 14:36:06 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:29.401 14:36:06 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:29.401 14:36:06 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:29.401 14:36:06 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:29.401 14:36:06 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:29.401 14:36:06 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.401 14:36:06 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:29.662 ************************************ 00:07:29.662 START TEST nvmf_llvm_fuzz 00:07:29.662 ************************************ 00:07:29.662 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:29.662 * Looking for test storage... 
00:07:29.662 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:29.662 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:29.662 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:29.662 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:29.662 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:29.662 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:29.662 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:29.662 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:29.662 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:29.662 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:29.662 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:29.662 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:29.662 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:29.663 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:29.663 #define SPDK_CONFIG_H 00:07:29.663 #define SPDK_CONFIG_APPS 1 00:07:29.663 #define SPDK_CONFIG_ARCH native 00:07:29.663 #undef SPDK_CONFIG_ASAN 00:07:29.663 #undef SPDK_CONFIG_AVAHI 00:07:29.663 #undef SPDK_CONFIG_CET 00:07:29.663 #define SPDK_CONFIG_COVERAGE 1 00:07:29.663 #define SPDK_CONFIG_CROSS_PREFIX 00:07:29.663 #undef SPDK_CONFIG_CRYPTO 00:07:29.663 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:29.663 #undef SPDK_CONFIG_CUSTOMOCF 00:07:29.663 #undef SPDK_CONFIG_DAOS 00:07:29.663 #define SPDK_CONFIG_DAOS_DIR 00:07:29.663 #define SPDK_CONFIG_DEBUG 1 00:07:29.663 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:29.663 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:29.664 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:29.664 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:29.664 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:29.664 #undef SPDK_CONFIG_DPDK_UADK 00:07:29.664 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:29.664 #define SPDK_CONFIG_EXAMPLES 1 00:07:29.664 #undef SPDK_CONFIG_FC 00:07:29.664 #define SPDK_CONFIG_FC_PATH 00:07:29.664 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:29.664 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:29.664 #undef SPDK_CONFIG_FUSE 00:07:29.664 #define SPDK_CONFIG_FUZZER 1 00:07:29.664 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:29.664 #undef SPDK_CONFIG_GOLANG 00:07:29.664 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:29.664 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:29.664 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:29.664 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:29.664 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:29.664 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:29.664 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:29.664 #define SPDK_CONFIG_IDXD 1 00:07:29.664 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:29.664 #undef SPDK_CONFIG_IPSEC_MB 00:07:29.664 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:29.664 #define SPDK_CONFIG_ISAL 1 00:07:29.664 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:07:29.664 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:29.664 #define SPDK_CONFIG_LIBDIR 00:07:29.664 #undef SPDK_CONFIG_LTO 00:07:29.664 #define SPDK_CONFIG_MAX_LCORES 128 00:07:29.664 #define SPDK_CONFIG_NVME_CUSE 1 00:07:29.664 #undef SPDK_CONFIG_OCF 00:07:29.664 #define SPDK_CONFIG_OCF_PATH 00:07:29.664 #define SPDK_CONFIG_OPENSSL_PATH 00:07:29.664 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:29.664 #define SPDK_CONFIG_PGO_DIR 00:07:29.664 #undef SPDK_CONFIG_PGO_USE 00:07:29.664 #define SPDK_CONFIG_PREFIX /usr/local 00:07:29.664 #undef SPDK_CONFIG_RAID5F 00:07:29.664 #undef SPDK_CONFIG_RBD 00:07:29.664 #define SPDK_CONFIG_RDMA 1 00:07:29.664 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:29.664 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:29.664 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:29.664 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:29.664 #undef SPDK_CONFIG_SHARED 00:07:29.664 #undef SPDK_CONFIG_SMA 00:07:29.664 #define SPDK_CONFIG_TESTS 1 00:07:29.664 #undef SPDK_CONFIG_TSAN 00:07:29.664 #define SPDK_CONFIG_UBLK 1 00:07:29.664 #define SPDK_CONFIG_UBSAN 1 00:07:29.664 #undef SPDK_CONFIG_UNIT_TESTS 00:07:29.664 #undef SPDK_CONFIG_URING 00:07:29.664 #define SPDK_CONFIG_URING_PATH 00:07:29.664 #undef SPDK_CONFIG_URING_ZNS 00:07:29.664 #undef SPDK_CONFIG_USDT 00:07:29.664 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:29.664 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:29.664 #define SPDK_CONFIG_VFIO_USER 1 00:07:29.664 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:29.664 #define SPDK_CONFIG_VHOST 1 00:07:29.664 #define SPDK_CONFIG_VIRTIO 1 00:07:29.664 #undef SPDK_CONFIG_VTUNE 00:07:29.664 #define SPDK_CONFIG_VTUNE_DIR 00:07:29.664 #define SPDK_CONFIG_WERROR 1 00:07:29.664 #define SPDK_CONFIG_WPDK_DIR 00:07:29.664 #undef SPDK_CONFIG_XNVME 00:07:29.664 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:29.664 
14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:29.664 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:29.665 14:36:06 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # : 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:29.665 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # cat 00:07:29.666 14:36:06 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:29.666 14:36:06 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j72 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 1423691 ]] 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 1423691 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1707 -- # set_test_storage 2147483648 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.KOtZQu 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.KOtZQu/tests/nvmf /tmp/spdk.KOtZQu 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # 
uses["$mount"]=0 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=893108224 00:07:29.666 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4391321600 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=87342514176 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=94508576768 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=7166062592 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47198650368 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254286336 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=18895826944 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=18901716992 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5890048 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47253942272 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254290432 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=348160 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.667 14:36:06 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=9450852352 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450856448 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:29.667 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:29.927 * Looking for test storage... 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=87342514176 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=9380655104 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:29.927 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1709 -- # set -o errtrace 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1710 -- # shopt -s extdebug 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1713 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1714 -- # true 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1716 -- # xtrace_fd 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- 
# printf %02d 0 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:29.927 14:36:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:07:29.927 [2024-07-12 14:36:06.522238] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:07:29.927 [2024-07-12 14:36:06.522313] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1423737 ] 00:07:29.927 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.185 [2024-07-12 14:36:06.846651] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.185 [2024-07-12 14:36:06.939525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.443 [2024-07-12 14:36:06.999296] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.443 [2024-07-12 14:36:07.015503] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:07:30.443 INFO: Running with entropic power schedule (0xFF, 100). 00:07:30.443 INFO: Seed: 1674176240 00:07:30.443 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:30.443 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:30.443 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:30.443 INFO: A corpus is not provided, starting from an empty corpus 00:07:30.443 #2 INITED exec/s: 0 rss: 65Mb 00:07:30.443 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:30.443 This may also happen if the target rejected all inputs we tried so far 00:07:30.443 [2024-07-12 14:36:07.080847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.443 [2024-07-12 14:36:07.080876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.702 NEW_FUNC[1/694]: 0x483e80 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:07:30.702 NEW_FUNC[2/694]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:30.702 #20 NEW cov: 11819 ft: 11850 corp: 2/70b lim: 320 exec/s: 0 rss: 72Mb L: 69/69 MS: 3 ChangeBit-CopyPart-InsertRepeatedBytes- 00:07:30.702 [2024-07-12 14:36:07.431834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.702 [2024-07-12 14:36:07.431884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.702 NEW_FUNC[1/1]: 0x133fc60 in nvmf_transport_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:727 00:07:30.702 #21 NEW cov: 11981 ft: 12597 corp: 3/139b lim: 320 exec/s: 0 rss: 72Mb L: 69/69 MS: 1 CopyPart- 00:07:30.961 [2024-07-12 14:36:07.491804] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.961 [2024-07-12 14:36:07.491836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.961 #32 NEW cov: 12004 ft: 12918 corp: 4/209b lim: 320 exec/s: 0 rss: 72Mb L: 70/70 MS: 1 CrossOver- 00:07:30.961 [2024-07-12 14:36:07.531913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.961 [2024-07-12 14:36:07.531942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.961 #33 NEW cov: 12089 ft: 13088 corp: 5/317b lim: 320 exec/s: 0 rss: 72Mb L: 108/108 MS: 1 InsertRepeatedBytes- 00:07:30.961 [2024-07-12 14:36:07.572017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.961 [2024-07-12 14:36:07.572044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.961 #34 NEW cov: 12089 ft: 13211 corp: 6/416b lim: 320 exec/s: 0 rss: 72Mb L: 99/108 MS: 1 CopyPart- 00:07:30.961 [2024-07-12 14:36:07.612157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (49) qid:0 cid:4 nsid:49494949 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.961 [2024-07-12 14:36:07.612184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.961 NEW_FUNC[1/1]: 0x17c03f0 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:07:30.961 #38 NEW cov: 12102 ft: 13601 corp: 7/524b 
lim: 320 exec/s: 0 rss: 72Mb L: 108/108 MS: 4 ShuffleBytes-ChangeBit-InsertByte-InsertRepeatedBytes- 00:07:30.961 [2024-07-12 14:36:07.652214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.961 [2024-07-12 14:36:07.652240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.961 #39 NEW cov: 12102 ft: 13703 corp: 8/647b lim: 320 exec/s: 0 rss: 72Mb L: 123/123 MS: 1 InsertRepeatedBytes- 00:07:30.961 [2024-07-12 14:36:07.702352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:920000 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.961 [2024-07-12 14:36:07.702378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.961 #40 NEW cov: 12102 ft: 13735 corp: 9/756b lim: 320 exec/s: 0 rss: 72Mb L: 109/123 MS: 1 InsertByte- 00:07:30.961 [2024-07-12 14:36:07.742466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:30.961 [2024-07-12 14:36:07.742494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.219 #41 NEW cov: 12102 ft: 13800 corp: 10/825b lim: 320 exec/s: 0 rss: 72Mb L: 69/123 MS: 1 ChangeBinInt- 00:07:31.219 [2024-07-12 14:36:07.792629] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.219 [2024-07-12 14:36:07.792656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.219 #42 NEW cov: 12102 ft: 13843 corp: 11/895b lim: 320 exec/s: 0 rss: 73Mb L: 70/123 MS: 1 ChangeBit- 00:07:31.219 [2024-07-12 14:36:07.842949] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x8000000 00:07:31.219 [2024-07-12 14:36:07.842975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.219 #48 NEW cov: 12102 ft: 13954 corp: 12/965b lim: 320 exec/s: 0 rss: 73Mb L: 70/123 MS: 1 ChangeBinInt- 00:07:31.219 [2024-07-12 14:36:07.882834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.219 [2024-07-12 14:36:07.882860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.219 #49 NEW cov: 12102 ft: 13962 corp: 13/1088b lim: 320 exec/s: 0 rss: 73Mb L: 123/123 MS: 1 ShuffleBytes- 00:07:31.219 [2024-07-12 14:36:07.932978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.219 [2024-07-12 14:36:07.933004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.219 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:31.219 #50 NEW cov: 12125 ft: 14103 corp: 14/1211b lim: 
320 exec/s: 0 rss: 73Mb L: 123/123 MS: 1 ChangeBinInt- 00:07:31.219 [2024-07-12 14:36:07.973227] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:b8b8b8b8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xb8b8b8b8b8b8b8b8 00:07:31.219 [2024-07-12 14:36:07.973257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.219 [2024-07-12 14:36:07.973313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b8) qid:0 cid:5 nsid:b8b8b8b8 cdw10:0000b8b8 cdw11:00000000 00:07:31.219 [2024-07-12 14:36:07.973328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.219 #51 NEW cov: 12126 ft: 14260 corp: 15/1380b lim: 320 exec/s: 0 rss: 73Mb L: 169/169 MS: 1 InsertRepeatedBytes- 00:07:31.478 [2024-07-12 14:36:08.013223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2e 00:07:31.478 [2024-07-12 14:36:08.013250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.478 #52 NEW cov: 12126 ft: 14391 corp: 16/1491b lim: 320 exec/s: 0 rss: 73Mb L: 111/169 MS: 1 InsertRepeatedBytes- 00:07:31.478 [2024-07-12 14:36:08.053341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2e 00:07:31.478 [2024-07-12 14:36:08.053368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.478 #53 NEW cov: 12126 ft: 14404 corp: 17/1602b lim: 320 exec/s: 53 rss: 73Mb L: 111/169 MS: 1 ChangeBinInt- 00:07:31.478 [2024-07-12 14:36:08.103464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00040000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.478 [2024-07-12 14:36:08.103490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.478 #58 NEW cov: 12126 ft: 14423 corp: 18/1666b lim: 320 exec/s: 58 rss: 73Mb L: 64/169 MS: 5 EraseBytes-ChangeByte-InsertByte-CopyPart-InsertByte- 00:07:31.478 [2024-07-12 14:36:08.143597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.478 [2024-07-12 14:36:08.143622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.478 #59 NEW cov: 12126 ft: 14443 corp: 19/1736b lim: 320 exec/s: 59 rss: 73Mb L: 70/169 MS: 1 CrossOver- 00:07:31.478 [2024-07-12 14:36:08.193861] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:b8b8b8b8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xb8b8b8b8b8b8b8b8 00:07:31.478 [2024-07-12 14:36:08.193887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.478 [2024-07-12 14:36:08.193943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b8) qid:0 cid:5 nsid:b8b8b8b8 cdw10:0000b8b8 cdw11:00000000 00:07:31.478 [2024-07-12 14:36:08.193957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.478 #60 NEW cov: 12126 ft: 14448 corp: 20/1905b lim: 320 exec/s: 60 rss: 73Mb L: 169/169 MS: 1 ShuffleBytes- 00:07:31.478 [2024-07-12 14:36:08.243988] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:b8b8b8b8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xb8b8b8b8b8b8b8b8 00:07:31.478 [2024-07-12 14:36:08.244014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.478 [2024-07-12 14:36:08.244070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b8) qid:0 cid:5 nsid:b8b8b8b8 cdw10:0000b8b8 cdw11:00000000 00:07:31.478 [2024-07-12 14:36:08.244085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.478 #61 NEW cov: 12126 ft: 14476 corp: 21/2074b lim: 320 exec/s: 61 rss: 73Mb L: 169/169 MS: 1 ChangeBit- 00:07:31.736 [2024-07-12 14:36:08.284004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.736 [2024-07-12 14:36:08.284030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.736 #63 NEW cov: 12126 ft: 14558 corp: 22/2198b lim: 320 exec/s: 63 rss: 73Mb L: 124/169 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:31.736 [2024-07-12 14:36:08.334126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.736 [2024-07-12 14:36:08.334153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.736 #69 NEW cov: 12126 ft: 14565 corp: 23/2321b lim: 320 exec/s: 69 rss: 73Mb L: 123/169 MS: 1 ShuffleBytes- 00:07:31.736 [2024-07-12 14:36:08.384270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.736 [2024-07-12 14:36:08.384297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.736 #70 NEW cov: 12126 ft: 14609 corp: 24/2390b lim: 320 exec/s: 70 rss: 73Mb L: 69/169 MS: 1 ShuffleBytes- 00:07:31.736 [2024-07-12 14:36:08.424648] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:b8b8b8b8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xb8b8b8b8b8b8b8b8 00:07:31.736 [2024-07-12 14:36:08.424674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.736 [2024-07-12 14:36:08.424730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b8) qid:0 cid:5 nsid:b8b8b8b8 cdw10:0000b8b8 cdw11:00000000 00:07:31.736 [2024-07-12 14:36:08.424744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.736 [2024-07-12 14:36:08.424798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff 00:07:31.736 [2024-07-12 14:36:08.424812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.736 #71 NEW cov: 12127 ft: 14797 corp: 
25/2619b lim: 320 exec/s: 71 rss: 73Mb L: 229/229 MS: 1 InsertRepeatedBytes- 00:07:31.736 [2024-07-12 14:36:08.474538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.736 [2024-07-12 14:36:08.474564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.736 #72 NEW cov: 12127 ft: 14837 corp: 26/2727b lim: 320 exec/s: 72 rss: 73Mb L: 108/229 MS: 1 ShuffleBytes- 00:07:31.736 [2024-07-12 14:36:08.514691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.736 [2024-07-12 14:36:08.514717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.995 #73 NEW cov: 12127 ft: 14869 corp: 27/2850b lim: 320 exec/s: 73 rss: 73Mb L: 123/229 MS: 1 CopyPart- 00:07:31.995 [2024-07-12 14:36:08.564911] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:b8b8b8b8 SGL TRANSPORT DATA BLOCK TRANSPORT 0xb8b8b8b8b8b8b8b8 00:07:31.995 [2024-07-12 14:36:08.564937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.995 [2024-07-12 14:36:08.564997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (b8) qid:0 cid:5 nsid:b8b8b8b8 cdw10:0000b8b8 cdw11:00000000 00:07:31.995 [2024-07-12 14:36:08.565011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.995 #74 NEW cov: 12127 ft: 14882 corp: 28/3019b lim: 320 exec/s: 74 rss: 73Mb L: 169/229 MS: 1 ShuffleBytes- 00:07:31.995 [2024-07-12 14:36:08.605034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.995 [2024-07-12 14:36:08.605062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.995 [2024-07-12 14:36:08.605113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:31.995 [2024-07-12 14:36:08.605126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.995 #75 NEW cov: 12127 ft: 14911 corp: 29/3202b lim: 320 exec/s: 75 rss: 73Mb L: 183/229 MS: 1 InsertRepeatedBytes- 00:07:31.995 [2024-07-12 14:36:08.655067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.995 [2024-07-12 14:36:08.655095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.995 #76 NEW cov: 12127 ft: 14924 corp: 30/3325b lim: 320 exec/s: 76 rss: 73Mb L: 123/229 MS: 1 ChangeByte- 00:07:31.995 [2024-07-12 14:36:08.705220] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.995 [2024-07-12 14:36:08.705248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:31.995 #77 NEW cov: 12127 ft: 14936 corp: 31/3406b lim: 320 exec/s: 77 rss: 73Mb L: 81/229 MS: 1 CopyPart- 00:07:31.995 [2024-07-12 14:36:08.755339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:31.995 [2024-07-12 14:36:08.755366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.995 #78 NEW cov: 12127 ft: 14952 corp: 32/3475b lim: 320 exec/s: 78 rss: 73Mb L: 69/229 MS: 1 CopyPart- 00:07:32.253 [2024-07-12 14:36:08.795437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1000002e 00:07:32.253 [2024-07-12 14:36:08.795464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.253 #79 NEW cov: 12127 ft: 14971 corp: 33/3586b lim: 320 exec/s: 79 rss: 74Mb L: 111/229 MS: 1 ChangeBit- 00:07:32.253 [2024-07-12 14:36:08.845614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.253 [2024-07-12 14:36:08.845639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.253 #80 NEW cov: 12127 ft: 14973 corp: 34/3709b lim: 320 exec/s: 80 rss: 74Mb L: 123/229 MS: 1 ChangeByte- 00:07:32.253 [2024-07-12 14:36:08.885698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:32.253 [2024-07-12 14:36:08.885724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.253 #84 NEW cov: 12127 ft: 14987 corp: 35/3833b lim: 320 exec/s: 84 rss: 74Mb L: 124/229 MS: 4 ShuffleBytes-ChangeByte-CrossOver-CrossOver- 00:07:32.253 [2024-07-12 14:36:08.925841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.253 [2024-07-12 14:36:08.925868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.253 #85 NEW cov: 12127 ft: 15000 corp: 36/3956b lim: 320 exec/s: 85 rss: 74Mb L: 123/229 MS: 1 ChangeBinInt- 00:07:32.253 [2024-07-12 14:36:08.975989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.253 [2024-07-12 14:36:08.976014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.253 #86 NEW cov: 12127 ft: 15005 corp: 37/4045b lim: 320 exec/s: 86 rss: 74Mb L: 89/229 MS: 1 EraseBytes- 00:07:32.253 [2024-07-12 14:36:09.026330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8a) qid:0 cid:4 nsid:920000 cdw10:00000000 cdw11:002e2e2e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.253 [2024-07-12 14:36:09.026357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.253 [2024-07-12 14:36:09.026406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 
nsid:0 cdw10:9f9f9f9f cdw11:9f9f9f9f 00:07:32.253 [2024-07-12 14:36:09.026420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.253 [2024-07-12 14:36:09.026468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:9f9f9f9f cdw11:009f9f9f 00:07:32.253 [2024-07-12 14:36:09.026481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.512 #87 NEW cov: 12127 ft: 15046 corp: 38/4265b lim: 320 exec/s: 43 rss: 74Mb L: 220/229 MS: 1 CrossOver- 00:07:32.512 #87 DONE cov: 12127 ft: 15046 corp: 38/4265b lim: 320 exec/s: 43 rss: 74Mb 00:07:32.512 Done 87 runs in 2 second(s) 00:07:32.512 14:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:07:32.512 14:36:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:32.512 14:36:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:32.512 14:36:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:32.512 14:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:07:32.512 14:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:32.512 14:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:32.512 14:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:32.512 14:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:07:32.512 14:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:32.512 14:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:32.512 14:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:07:32.512 14:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:07:32.512 14:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:32.512 14:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:07:32.512 14:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:32.512 14:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:32.512 14:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:32.512 14:36:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:07:32.513 [2024-07-12 14:36:09.255320] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:07:32.513 [2024-07-12 14:36:09.255414] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1424113 ] 00:07:32.513 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.079 [2024-07-12 14:36:09.591461] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.079 [2024-07-12 14:36:09.686827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.079 [2024-07-12 14:36:09.745994] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.079 [2024-07-12 14:36:09.762193] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:07:33.079 INFO: Running with entropic power schedule (0xFF, 100). 00:07:33.079 INFO: Seed: 125213086 00:07:33.079 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:33.079 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:33.079 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:33.079 INFO: A corpus is not provided, starting from an empty corpus 00:07:33.079 #2 INITED exec/s: 0 rss: 64Mb 00:07:33.079 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:33.079 This may also happen if the target rejected all inputs we tried so far 00:07:33.079 [2024-07-12 14:36:09.817292] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100000a0a 00:07:33.079 [2024-07-12 14:36:09.817500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2f31812f cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.079 [2024-07-12 14:36:09.817535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.645 NEW_FUNC[1/696]: 0x484780 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:07:33.645 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:33.645 #7 NEW cov: 11913 ft: 11928 corp: 2/7b lim: 30 exec/s: 0 rss: 72Mb L: 6/6 MS: 5 InsertByte-ShuffleBytes-InsertByte-ChangeByte-CopyPart- 00:07:33.645 [2024-07-12 14:36:10.168340] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100005893 00:07:33.645 [2024-07-12 14:36:10.168629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff8124 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.645 [2024-07-12 14:36:10.168684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.645 #8 NEW cov: 12059 ft: 12576 corp: 3/16b lim: 30 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 CMP- DE: "\377$}X\223N\253\260"- 00:07:33.645 [2024-07-12 14:36:10.218224] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000000a 00:07:33.645 [2024-07-12 14:36:10.218438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.645 [2024-07-12 14:36:10.218464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.645 #10 NEW cov: 12065 ft: 12848 corp: 4/22b lim: 30 exec/s: 0 rss: 72Mb L: 6/9 MS: 2 CopyPart-CMP- DE: "\000\000\002\000"- 00:07:33.645 [2024-07-12 14:36:10.258393] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f3f3 00:07:33.645 [2024-07-12 14:36:10.258506] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f3f3 00:07:33.645 [2024-07-12 14:36:10.258732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0af383f3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.645 [2024-07-12 14:36:10.258757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.645 [2024-07-12 14:36:10.258809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:f3f383f3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.645 [2024-07-12 14:36:10.258822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.645 #11 NEW cov: 12150 ft: 13455 corp: 5/35b lim: 30 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 InsertRepeatedBytes- 00:07:33.645 [2024-07-12 14:36:10.298673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:03000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.645 [2024-07-12 14:36:10.298699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.645 #12 NEW cov: 12182 ft: 13544 corp: 6/44b lim: 30 exec/s: 0 rss: 72Mb L: 9/13 MS: 1 CMP- DE: "\003\000\000\000\000\000\000\000"- 00:07:33.645 [2024-07-12 14:36:10.338751] ctrlr.c:2678:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (32000) > len (44) 00:07:33.645 [2024-07-12 14:36:10.338863] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (877136) > buf size (4096) 00:07:33.645 [2024-07-12 14:36:10.339076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:03000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.645 [2024-07-12 14:36:10.339101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.645 [2024-07-12 14:36:10.339156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000a00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.645 [2024-07-12 14:36:10.339170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.645 [2024-07-12 14:36:10.339223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:5893834e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.645 [2024-07-12 14:36:10.339236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.645 #13 NEW cov: 12196 ft: 13959 corp: 7/62b lim: 30 exec/s: 0 rss: 72Mb L: 18/18 MS: 1 CrossOver- 00:07:33.645 [2024-07-12 14:36:10.388852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:03000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.645 [2024-07-12 14:36:10.388878] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.645 #14 NEW cov: 12196 ft: 14068 corp: 8/71b lim: 30 exec/s: 0 rss: 72Mb L: 9/18 MS: 1 CopyPart- 00:07:33.645 [2024-07-12 14:36:10.428861] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (68612) > buf size (4096) 00:07:33.645 [2024-07-12 14:36:10.429086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:43000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.645 [2024-07-12 14:36:10.429111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.903 #20 NEW cov: 12196 ft: 14145 corp: 9/80b lim: 30 exec/s: 0 rss: 72Mb L: 9/18 MS: 1 ChangeBit- 00:07:33.903 [2024-07-12 14:36:10.478947] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000000a 00:07:33.903 [2024-07-12 14:36:10.479149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.903 [2024-07-12 14:36:10.479175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.903 #21 NEW cov: 12196 ft: 14194 corp: 10/86b lim: 30 exec/s: 0 rss: 72Mb L: 6/18 MS: 1 ChangeByte- 00:07:33.903 [2024-07-12 14:36:10.529131] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f3f3 00:07:33.903 [2024-07-12 14:36:10.529243] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f3f3 00:07:33.904 [2024-07-12 14:36:10.529445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0af383f3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.904 [2024-07-12 14:36:10.529470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.904 [2024-07-12 14:36:10.529532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:f3f383f3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.904 [2024-07-12 14:36:10.529547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.904 #22 NEW cov: 12196 ft: 14248 corp: 11/99b lim: 30 exec/s: 0 rss: 73Mb L: 13/18 MS: 1 ChangeBit- 00:07:33.904 [2024-07-12 14:36:10.579487] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000b091 00:07:33.904 [2024-07-12 14:36:10.579615] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300004eab 00:07:33.904 [2024-07-12 14:36:10.579833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:03000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.904 [2024-07-12 14:36:10.579858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.904 [2024-07-12 14:36:10.579915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000a00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.904 [2024-07-12 14:36:10.579929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.904 
[2024-07-12 14:36:10.579983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0000021f cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.904 [2024-07-12 14:36:10.579996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.904 [2024-07-12 14:36:10.580050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7d008358 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.904 [2024-07-12 14:36:10.580066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.904 #23 NEW cov: 12196 ft: 14745 corp: 12/125b lim: 30 exec/s: 0 rss: 73Mb L: 26/26 MS: 1 CMP- DE: "\000\000\000\000\037\216\260\221"- 00:07:33.904 [2024-07-12 14:36:10.629348] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1f8e 00:07:33.904 [2024-07-12 14:36:10.629542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.904 [2024-07-12 14:36:10.629584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.904 #24 NEW cov: 12196 ft: 14770 corp: 13/134b lim: 30 exec/s: 0 rss: 73Mb L: 9/26 MS: 1 PersAutoDict- DE: "\000\000\000\000\037\216\260\221"- 00:07:33.904 [2024-07-12 14:36:10.679472] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100005893 00:07:33.904 [2024-07-12 14:36:10.679690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff8124 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.904 [2024-07-12 14:36:10.679716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.162 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:34.162 #25 NEW cov: 12219 ft: 14818 corp: 14/143b lim: 30 exec/s: 0 rss: 73Mb L: 9/26 MS: 1 ChangeBit- 00:07:34.162 [2024-07-12 14:36:10.729795] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000247d 00:07:34.162 [2024-07-12 14:36:10.729903] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ab0a 00:07:34.162 [2024-07-12 14:36:10.730091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:03000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.162 [2024-07-12 14:36:10.730116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.162 [2024-07-12 14:36:10.730172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000a81ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.162 [2024-07-12 14:36:10.730186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.162 [2024-07-12 14:36:10.730238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00580293 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.162 [2024-07-12 14:36:10.730251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.162 #26 NEW cov: 12219 ft: 14849 corp: 15/162b lim: 30 exec/s: 0 rss: 73Mb L: 19/26 MS: 1 InsertByte- 00:07:34.162 [2024-07-12 14:36:10.769769] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:07:34.162 [2024-07-12 14:36:10.769879] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:07:34.162 [2024-07-12 14:36:10.770078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.162 [2024-07-12 14:36:10.770102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.162 [2024-07-12 14:36:10.770157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.162 [2024-07-12 14:36:10.770171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.162 #27 NEW cov: 12219 ft: 14865 corp: 16/175b lim: 30 exec/s: 0 rss: 73Mb L: 13/26 MS: 1 InsertRepeatedBytes- 00:07:34.163 [2024-07-12 14:36:10.809845] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000000a 00:07:34.163 [2024-07-12 14:36:10.810043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a31022f cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.163 [2024-07-12 14:36:10.810071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.163 #28 NEW cov: 12219 ft: 14878 corp: 17/181b lim: 30 exec/s: 28 rss: 73Mb L: 6/26 MS: 1 CrossOver- 00:07:34.163 [2024-07-12 14:36:10.850013] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (534532) > buf size (4096) 00:07:34.163 [2024-07-12 14:36:10.850121] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:07:34.163 [2024-07-12 14:36:10.850323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.163 [2024-07-12 14:36:10.850347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.163 [2024-07-12 14:36:10.850401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.163 [2024-07-12 14:36:10.850415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.163 #29 NEW cov: 12219 ft: 14907 corp: 18/194b lim: 30 exec/s: 29 rss: 73Mb L: 13/26 MS: 1 ChangeBinInt- 00:07:34.163 [2024-07-12 14:36:10.900121] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100000a0a 00:07:34.163 [2024-07-12 14:36:10.900331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2f31816f cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.163 [2024-07-12 14:36:10.900356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.163 #30 NEW cov: 12219 ft: 
14921 corp: 19/200b lim: 30 exec/s: 30 rss: 73Mb L: 6/26 MS: 1 ChangeBit- 00:07:34.421 [2024-07-12 14:36:10.950459] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100000aff 00:07:34.421 [2024-07-12 14:36:10.950581] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ab0a 00:07:34.421 [2024-07-12 14:36:10.950778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:03000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.421 [2024-07-12 14:36:10.950804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.421 [2024-07-12 14:36:10.950859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:007d8158 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.421 [2024-07-12 14:36:10.950873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.421 [2024-07-12 14:36:10.950926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00240293 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.421 [2024-07-12 14:36:10.950939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.421 #31 NEW cov: 12219 ft: 14979 corp: 20/219b lim: 30 exec/s: 31 rss: 73Mb L: 19/26 MS: 1 ShuffleBytes- 00:07:34.421 [2024-07-12 14:36:11.000545] ctrlr.c:2678:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (2560) > len (16) 00:07:34.421 [2024-07-12 14:36:11.000746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:03000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.421 [2024-07-12 14:36:11.000770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.421 [2024-07-12 14:36:11.000826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00030000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.421 [2024-07-12 14:36:11.000840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.421 #32 NEW cov: 12219 ft: 14986 corp: 21/232b lim: 30 exec/s: 32 rss: 73Mb L: 13/26 MS: 1 CrossOver- 00:07:34.421 [2024-07-12 14:36:11.050566] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3d 00:07:34.421 [2024-07-12 14:36:11.050780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:43000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.421 [2024-07-12 14:36:11.050806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.421 #33 NEW cov: 12219 ft: 15002 corp: 22/242b lim: 30 exec/s: 33 rss: 73Mb L: 10/26 MS: 1 InsertByte- 00:07:34.421 [2024-07-12 14:36:11.090684] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa 00:07:34.421 [2024-07-12 14:36:11.090985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:43000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.421 [2024-07-12 14:36:11.091011] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.421 #34 NEW cov: 12220 ft: 15012 corp: 23/256b lim: 30 exec/s: 34 rss: 73Mb L: 14/26 MS: 1 CrossOver- 00:07:34.421 [2024-07-12 14:36:11.130804] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xee 00:07:34.421 [2024-07-12 14:36:11.131119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:43000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.421 [2024-07-12 14:36:11.131145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.421 #40 NEW cov: 12220 ft: 15022 corp: 24/270b lim: 30 exec/s: 40 rss: 73Mb L: 14/26 MS: 1 ChangeBinInt- 00:07:34.421 [2024-07-12 14:36:11.180938] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (534532) > buf size (4096) 00:07:34.421 [2024-07-12 14:36:11.181048] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:07:34.421 [2024-07-12 14:36:11.181251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.421 [2024-07-12 14:36:11.181276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.421 [2024-07-12 14:36:11.181332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.421 [2024-07-12 14:36:11.181359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.679 #41 NEW cov: 12220 ft: 15052 corp: 25/283b lim: 30 exec/s: 41 rss: 73Mb L: 13/26 MS: 1 ChangeBit- 00:07:34.679 [2024-07-12 14:36:11.231098] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1f 00:07:34.679 [2024-07-12 14:36:11.231214] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (670404) > buf size (4096) 00:07:34.679 [2024-07-12 14:36:11.231413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.679 [2024-07-12 14:36:11.231438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.679 [2024-07-12 14:36:11.231491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8eb00291 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.679 [2024-07-12 14:36:11.231506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.679 #42 NEW cov: 12220 ft: 15060 corp: 26/300b lim: 30 exec/s: 42 rss: 73Mb L: 17/26 MS: 1 CopyPart- 00:07:34.679 [2024-07-12 14:36:11.281205] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3d 00:07:34.679 [2024-07-12 14:36:11.281319] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (524292) > buf size (4096) 00:07:34.679 [2024-07-12 14:36:11.281517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:43000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.679 [2024-07-12 14:36:11.281544] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.679 [2024-07-12 14:36:11.281619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.679 [2024-07-12 14:36:11.281633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.679 #43 NEW cov: 12220 ft: 15073 corp: 27/314b lim: 30 exec/s: 43 rss: 74Mb L: 14/26 MS: 1 PersAutoDict- DE: "\000\000\002\000"- 00:07:34.679 [2024-07-12 14:36:11.331327] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (50368) > buf size (4096) 00:07:34.679 [2024-07-12 14:36:11.331545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:312f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.679 [2024-07-12 14:36:11.331569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.679 #45 NEW cov: 12220 ft: 15080 corp: 28/325b lim: 30 exec/s: 45 rss: 74Mb L: 11/26 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:34.679 [2024-07-12 14:36:11.381501] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:07:34.679 [2024-07-12 14:36:11.381623] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:07:34.680 [2024-07-12 14:36:11.381822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.680 [2024-07-12 14:36:11.381848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.680 [2024-07-12 14:36:11.381916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.680 [2024-07-12 14:36:11.381930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.680 #46 NEW cov: 12220 ft: 15086 corp: 29/339b lim: 30 exec/s: 46 rss: 74Mb L: 14/26 MS: 1 InsertByte- 00:07:34.680 [2024-07-12 14:36:11.421555] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100000a0a 00:07:34.680 [2024-07-12 14:36:11.421757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2f31816f cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.680 [2024-07-12 14:36:11.421782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.680 #47 NEW cov: 12220 ft: 15100 corp: 30/346b lim: 30 exec/s: 47 rss: 74Mb L: 7/26 MS: 1 InsertByte- 00:07:34.938 [2024-07-12 14:36:11.471757] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f3f3 00:07:34.938 [2024-07-12 14:36:11.471874] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (511952) > buf size (4096) 00:07:34.939 [2024-07-12 14:36:11.472077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0af383f3 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.939 [2024-07-12 14:36:11.472103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.939 [2024-07-12 14:36:11.472158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:f3f381f3 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.939 [2024-07-12 14:36:11.472172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.939 #48 NEW cov: 12220 ft: 15158 corp: 31/359b lim: 30 exec/s: 48 rss: 74Mb L: 13/26 MS: 1 CMP- DE: "\001\000\000\014"- 00:07:34.939 [2024-07-12 14:36:11.512125] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000b0ff 00:07:34.939 [2024-07-12 14:36:11.512246] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (524288) > buf size (4096) 00:07:34.939 [2024-07-12 14:36:11.512350] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (877136) > buf size (4096) 00:07:34.939 [2024-07-12 14:36:11.512554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:03000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.939 [2024-07-12 14:36:11.512579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.939 [2024-07-12 14:36:11.512633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000a00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.939 [2024-07-12 14:36:11.512647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.939 [2024-07-12 14:36:11.512700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0000021f cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.939 [2024-07-12 14:36:11.512714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.939 [2024-07-12 14:36:11.512764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff8104 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.939 [2024-07-12 14:36:11.512778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.939 [2024-07-12 14:36:11.512829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:5893834e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.939 [2024-07-12 14:36:11.512842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:34.939 #49 NEW cov: 12220 ft: 15226 corp: 32/389b lim: 30 exec/s: 49 rss: 74Mb L: 30/30 MS: 1 CMP- DE: "\377\377\377\004"- 00:07:34.939 [2024-07-12 14:36:11.551962] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000b0ff 00:07:34.939 [2024-07-12 14:36:11.552073] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (786432) > buf size (4096) 00:07:34.939 [2024-07-12 14:36:11.552273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0000021f cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.939 [2024-07-12 14:36:11.552298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:34.939 [2024-07-12 14:36:11.552352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff0291 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.939 [2024-07-12 14:36:11.552367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.939 #50 NEW cov: 12220 ft: 15269 corp: 33/406b lim: 30 exec/s: 50 rss: 74Mb L: 17/30 MS: 1 CrossOver- 00:07:34.939 [2024-07-12 14:36:11.602068] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000400a 00:07:34.939 [2024-07-12 14:36:11.602280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.939 [2024-07-12 14:36:11.602305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.939 #51 NEW cov: 12220 ft: 15270 corp: 34/412b lim: 30 exec/s: 51 rss: 74Mb L: 6/30 MS: 1 ChangeBit- 00:07:34.939 [2024-07-12 14:36:11.652216] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (68652) > buf size (4096) 00:07:34.939 [2024-07-12 14:36:11.652419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:430a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.939 [2024-07-12 14:36:11.652443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.939 #52 NEW cov: 12220 ft: 15285 corp: 35/422b lim: 30 exec/s: 52 rss: 74Mb L: 10/30 MS: 1 CrossOver- 00:07:34.939 [2024-07-12 14:36:11.692650] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100005893 00:07:34.939 [2024-07-12 14:36:11.692762] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (342704) > buf size (4096) 00:07:34.939 [2024-07-12 14:36:11.692865] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (877136) > buf size (4096) 00:07:34.939 [2024-07-12 14:36:11.693065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:03000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.939 [2024-07-12 14:36:11.693090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.939 [2024-07-12 14:36:11.693144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000a00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.939 [2024-07-12 14:36:11.693158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.939 [2024-07-12 14:36:11.693213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00ff8124 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.939 [2024-07-12 14:36:11.693226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.939 [2024-07-12 14:36:11.693279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:4eab81b0 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.939 [2024-07-12 14:36:11.693293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 
cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.939 [2024-07-12 14:36:11.693344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:5893834e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.939 [2024-07-12 14:36:11.693357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:35.198 #53 NEW cov: 12220 ft: 15293 corp: 36/452b lim: 30 exec/s: 53 rss: 74Mb L: 30/30 MS: 1 PersAutoDict- DE: "\377$}X\223N\253\260"- 00:07:35.198 [2024-07-12 14:36:11.742509] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000ffff 00:07:35.198 [2024-07-12 14:36:11.742629] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:07:35.198 [2024-07-12 14:36:11.742833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.198 [2024-07-12 14:36:11.742857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.198 [2024-07-12 14:36:11.742914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.199 [2024-07-12 14:36:11.742927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.199 #54 NEW cov: 12220 ft: 15298 corp: 37/466b lim: 30 exec/s: 54 rss: 74Mb L: 14/30 MS: 1 ChangeByte- 00:07:35.199 [2024-07-12 14:36:11.792602] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x120d 00:07:35.199 [2024-07-12 14:36:11.792825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.199 [2024-07-12 14:36:11.792850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.199 #55 NEW cov: 12220 ft: 15308 corp: 38/475b lim: 30 exec/s: 27 rss: 74Mb L: 9/30 MS: 1 CrossOver- 00:07:35.199 #55 DONE cov: 12220 ft: 15308 corp: 38/475b lim: 30 exec/s: 27 rss: 74Mb 00:07:35.199 ###### Recommended dictionary. ###### 00:07:35.199 "\377$}X\223N\253\260" # Uses: 1 00:07:35.199 "\000\000\002\000" # Uses: 2 00:07:35.199 "\003\000\000\000\000\000\000\000" # Uses: 0 00:07:35.199 "\000\000\000\000\037\216\260\221" # Uses: 1 00:07:35.199 "\001\000\000\014" # Uses: 0 00:07:35.199 "\377\377\377\004" # Uses: 0 00:07:35.199 ###### End of recommended dictionary. 
###### 00:07:35.199 Done 55 runs in 2 second(s) 00:07:35.199 14:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:07:35.199 14:36:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:35.199 14:36:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:35.199 14:36:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:35.199 14:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:07:35.199 14:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:35.199 14:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:35.199 14:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:35.199 14:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:07:35.199 14:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:35.199 14:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:35.199 14:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:07:35.199 14:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:07:35.199 14:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:35.199 14:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:07:35.199 14:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:35.199 14:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:35.199 14:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:35.199 14:36:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:07:35.458 [2024-07-12 14:36:11.986565] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:07:35.458 [2024-07-12 14:36:11.986637] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1424473 ] 00:07:35.458 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.458 [2024-07-12 14:36:12.193404] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.717 [2024-07-12 14:36:12.266719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.717 [2024-07-12 14:36:12.326182] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.717 [2024-07-12 14:36:12.342398] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:07:35.717 INFO: Running with entropic power schedule (0xFF, 100). 00:07:35.717 INFO: Seed: 2706225434 00:07:35.717 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:35.717 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:35.717 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:35.717 INFO: A corpus is not provided, starting from an empty corpus 00:07:35.717 #2 INITED exec/s: 0 rss: 65Mb 00:07:35.717 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:35.717 This may also happen if the target rejected all inputs we tried so far 00:07:35.717 [2024-07-12 14:36:12.407511] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:35.717 [2024-07-12 14:36:12.407736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.717 [2024-07-12 14:36:12.407767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.995 NEW_FUNC[1/695]: 0x487230 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:07:35.995 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:35.995 #5 NEW cov: 11884 ft: 11883 corp: 2/13b lim: 35 exec/s: 0 rss: 71Mb L: 12/12 MS: 3 CopyPart-CopyPart-InsertRepeatedBytes- 00:07:35.995 [2024-07-12 14:36:12.748762] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:35.995 [2024-07-12 14:36:12.748915] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:35.995 [2024-07-12 14:36:12.749175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.996 [2024-07-12 14:36:12.749253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.996 [2024-07-12 14:36:12.749362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.996 [2024-07-12 14:36:12.749407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.267 #6 NEW cov: 12014 ft: 12966 
corp: 3/29b lim: 35 exec/s: 0 rss: 72Mb L: 16/16 MS: 1 CopyPart- 00:07:36.267 [2024-07-12 14:36:12.808552] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.267 [2024-07-12 14:36:12.808670] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.267 [2024-07-12 14:36:12.808967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.267 [2024-07-12 14:36:12.808996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.267 [2024-07-12 14:36:12.809055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.267 [2024-07-12 14:36:12.809075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.267 [2024-07-12 14:36:12.809133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0a00000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.267 [2024-07-12 14:36:12.809149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.267 #7 NEW cov: 12030 ft: 13434 corp: 4/50b lim: 35 exec/s: 0 rss: 72Mb L: 21/21 MS: 1 CopyPart- 00:07:36.267 [2024-07-12 14:36:12.858617] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.267 [2024-07-12 14:36:12.858733] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.267 [2024-07-12 14:36:12.858932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.267 [2024-07-12 14:36:12.858959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.267 [2024-07-12 14:36:12.859013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.267 [2024-07-12 14:36:12.859029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.267 #8 NEW cov: 12115 ft: 13688 corp: 5/66b lim: 35 exec/s: 0 rss: 72Mb L: 16/21 MS: 1 ShuffleBytes- 00:07:36.267 [2024-07-12 14:36:12.898757] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.267 [2024-07-12 14:36:12.898870] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.267 [2024-07-12 14:36:12.899072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.267 [2024-07-12 14:36:12.899099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.267 [2024-07-12 14:36:12.899154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.267 [2024-07-12 
14:36:12.899170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.267 #9 NEW cov: 12115 ft: 13770 corp: 6/85b lim: 35 exec/s: 0 rss: 72Mb L: 19/21 MS: 1 CrossOver- 00:07:36.267 [2024-07-12 14:36:12.938855] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.267 [2024-07-12 14:36:12.939062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.267 [2024-07-12 14:36:12.939088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.267 #10 NEW cov: 12115 ft: 13859 corp: 7/96b lim: 35 exec/s: 0 rss: 72Mb L: 11/21 MS: 1 EraseBytes- 00:07:36.267 [2024-07-12 14:36:12.989140] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.267 [2024-07-12 14:36:12.989275] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.267 [2024-07-12 14:36:12.989610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.267 [2024-07-12 14:36:12.989635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.267 [2024-07-12 14:36:12.989687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.267 [2024-07-12 14:36:12.989703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.267 [2024-07-12 14:36:12.989755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.267 [2024-07-12 14:36:12.989768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.267 #11 NEW cov: 12115 ft: 14022 corp: 8/122b lim: 35 exec/s: 0 rss: 72Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:07:36.267 [2024-07-12 14:36:13.029140] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.267 [2024-07-12 14:36:13.029256] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.267 [2024-07-12 14:36:13.029463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.267 [2024-07-12 14:36:13.029490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.267 [2024-07-12 14:36:13.029548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.267 [2024-07-12 14:36:13.029563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.526 #12 NEW cov: 12115 ft: 14056 corp: 9/139b lim: 35 exec/s: 0 rss: 72Mb L: 17/26 MS: 1 CrossOver- 00:07:36.526 [2024-07-12 14:36:13.079297] 
ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.526 [2024-07-12 14:36:13.079410] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.526 [2024-07-12 14:36:13.079641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.526 [2024-07-12 14:36:13.079669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.526 [2024-07-12 14:36:13.079727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00f50000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.526 [2024-07-12 14:36:13.079745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.526 #13 NEW cov: 12115 ft: 14090 corp: 10/158b lim: 35 exec/s: 0 rss: 72Mb L: 19/26 MS: 1 ChangeByte- 00:07:36.526 [2024-07-12 14:36:13.129357] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.526 [2024-07-12 14:36:13.129590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:20000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.526 [2024-07-12 14:36:13.129618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.526 #14 NEW cov: 12115 ft: 14149 corp: 11/170b lim: 35 exec/s: 0 rss: 72Mb L: 12/26 MS: 1 ChangeBit- 00:07:36.526 [2024-07-12 14:36:13.169541] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.526 [2024-07-12 14:36:13.169660] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.526 [2024-07-12 14:36:13.169877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.526 [2024-07-12 14:36:13.169904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.526 [2024-07-12 14:36:13.169960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.526 [2024-07-12 14:36:13.169975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.526 #15 NEW cov: 12115 ft: 14180 corp: 12/186b lim: 35 exec/s: 0 rss: 72Mb L: 16/26 MS: 1 ChangeBit- 00:07:36.526 [2024-07-12 14:36:13.209644] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.526 [2024-07-12 14:36:13.209765] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.526 [2024-07-12 14:36:13.209986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.526 [2024-07-12 14:36:13.210012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.526 [2024-07-12 14:36:13.210069] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.526 [2024-07-12 14:36:13.210085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.526 #16 NEW cov: 12115 ft: 14192 corp: 13/201b lim: 35 exec/s: 0 rss: 73Mb L: 15/26 MS: 1 EraseBytes- 00:07:36.526 [2024-07-12 14:36:13.249806] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.526 [2024-07-12 14:36:13.249923] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.526 [2024-07-12 14:36:13.250141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.526 [2024-07-12 14:36:13.250171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.526 [2024-07-12 14:36:13.250225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00f50000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.526 [2024-07-12 14:36:13.250240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.526 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:36.526 #17 NEW cov: 12138 ft: 14231 corp: 14/220b lim: 35 exec/s: 0 rss: 73Mb L: 19/26 MS: 1 ShuffleBytes- 00:07:36.526 [2024-07-12 14:36:13.299937] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.526 [2024-07-12 14:36:13.300239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00090000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.526 [2024-07-12 14:36:13.300266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.526 [2024-07-12 14:36:13.300321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.526 [2024-07-12 14:36:13.300335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.784 #18 NEW cov: 12138 ft: 14253 corp: 15/238b lim: 35 exec/s: 0 rss: 73Mb L: 18/26 MS: 1 CMP- DE: "\000\011"- 00:07:36.784 [2024-07-12 14:36:13.339997] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.784 [2024-07-12 14:36:13.340215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.784 [2024-07-12 14:36:13.340242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.784 #19 NEW cov: 12138 ft: 14262 corp: 16/248b lim: 35 exec/s: 0 rss: 73Mb L: 10/26 MS: 1 EraseBytes- 00:07:36.784 [2024-07-12 14:36:13.390190] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.785 [2024-07-12 14:36:13.390303] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid 
NSID 0 00:07:36.785 [2024-07-12 14:36:13.390499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.785 [2024-07-12 14:36:13.390531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.785 [2024-07-12 14:36:13.390584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00f50000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.785 [2024-07-12 14:36:13.390599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.785 #20 NEW cov: 12138 ft: 14275 corp: 17/267b lim: 35 exec/s: 20 rss: 73Mb L: 19/26 MS: 1 CopyPart- 00:07:36.785 [2024-07-12 14:36:13.440362] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.785 [2024-07-12 14:36:13.440479] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.785 [2024-07-12 14:36:13.440779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.785 [2024-07-12 14:36:13.440806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.785 [2024-07-12 14:36:13.440859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00f50000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.785 [2024-07-12 14:36:13.440879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.785 [2024-07-12 14:36:13.440931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.785 [2024-07-12 14:36:13.440945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.785 #21 NEW cov: 12138 ft: 14317 corp: 18/288b lim: 35 exec/s: 21 rss: 73Mb L: 21/26 MS: 1 PersAutoDict- DE: "\000\011"- 00:07:36.785 [2024-07-12 14:36:13.490493] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.785 [2024-07-12 14:36:13.490618] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.785 [2024-07-12 14:36:13.490835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:20000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.785 [2024-07-12 14:36:13.490862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.785 [2024-07-12 14:36:13.490916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.785 [2024-07-12 14:36:13.490932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.785 #22 NEW cov: 12138 ft: 14342 corp: 19/302b lim: 35 exec/s: 22 rss: 73Mb L: 14/26 MS: 1 CopyPart- 00:07:36.785 [2024-07-12 
14:36:13.540597] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.785 [2024-07-12 14:36:13.540716] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:36.785 [2024-07-12 14:36:13.540923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.785 [2024-07-12 14:36:13.540950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.785 [2024-07-12 14:36:13.541004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:09000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.785 [2024-07-12 14:36:13.541020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.785 #23 NEW cov: 12138 ft: 14391 corp: 20/318b lim: 35 exec/s: 23 rss: 73Mb L: 16/26 MS: 1 PersAutoDict- DE: "\000\011"- 00:07:37.044 [2024-07-12 14:36:13.580773] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.044 [2024-07-12 14:36:13.580894] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.044 [2024-07-12 14:36:13.581296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.044 [2024-07-12 14:36:13.581323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.044 [2024-07-12 14:36:13.581376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a0a0000 cdw11:92000092 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.044 [2024-07-12 14:36:13.581391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.044 [2024-07-12 14:36:13.581445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:92920092 cdw11:92009292 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.044 [2024-07-12 14:36:13.581459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.044 [2024-07-12 14:36:13.581515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:92920092 cdw11:92009292 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.044 [2024-07-12 14:36:13.581534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.044 #24 NEW cov: 12138 ft: 14868 corp: 21/352b lim: 35 exec/s: 24 rss: 73Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:37.044 [2024-07-12 14:36:13.630842] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.044 [2024-07-12 14:36:13.631139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00090000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.044 [2024-07-12 14:36:13.631169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.044 [2024-07-12 14:36:13.631227] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.044 [2024-07-12 14:36:13.631244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.044 #25 NEW cov: 12138 ft: 14955 corp: 22/370b lim: 35 exec/s: 25 rss: 73Mb L: 18/34 MS: 1 PersAutoDict- DE: "\000\011"- 00:07:37.044 [2024-07-12 14:36:13.680998] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.044 [2024-07-12 14:36:13.681111] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.044 [2024-07-12 14:36:13.681318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00110000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.044 [2024-07-12 14:36:13.681344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.044 [2024-07-12 14:36:13.681398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.044 [2024-07-12 14:36:13.681414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.044 #26 NEW cov: 12138 ft: 14978 corp: 23/387b lim: 35 exec/s: 26 rss: 73Mb L: 17/34 MS: 1 ChangeBinInt- 00:07:37.044 [2024-07-12 14:36:13.731079] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.044 [2024-07-12 14:36:13.731286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.044 [2024-07-12 14:36:13.731313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.044 #27 NEW cov: 12138 ft: 15019 corp: 24/400b lim: 35 exec/s: 27 rss: 73Mb L: 13/34 MS: 1 CrossOver- 00:07:37.044 [2024-07-12 14:36:13.771357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6e6e000a cdw11:6e006e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.044 [2024-07-12 14:36:13.771382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.044 #30 NEW cov: 12138 ft: 15069 corp: 25/410b lim: 35 exec/s: 30 rss: 73Mb L: 10/34 MS: 3 ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:37.044 [2024-07-12 14:36:13.811306] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.044 [2024-07-12 14:36:13.811426] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.044 [2024-07-12 14:36:13.811636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000f30a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.044 [2024-07-12 14:36:13.811662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.044 [2024-07-12 14:36:13.811716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00f50000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:37.044 [2024-07-12 14:36:13.811734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.304 #31 NEW cov: 12138 ft: 15075 corp: 26/429b lim: 35 exec/s: 31 rss: 73Mb L: 19/34 MS: 1 ChangeByte- 00:07:37.304 [2024-07-12 14:36:13.851491] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.304 [2024-07-12 14:36:13.851620] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.304 [2024-07-12 14:36:13.851726] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.304 [2024-07-12 14:36:13.852023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.304 [2024-07-12 14:36:13.852050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.304 [2024-07-12 14:36:13.852106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a0a0000 cdw11:00002200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.304 [2024-07-12 14:36:13.852121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.304 [2024-07-12 14:36:13.852175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:92000092 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.304 [2024-07-12 14:36:13.852191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.304 [2024-07-12 14:36:13.852247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:92920092 cdw11:92009292 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.304 [2024-07-12 14:36:13.852260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.304 #32 NEW cov: 12138 ft: 15100 corp: 27/463b lim: 35 exec/s: 32 rss: 73Mb L: 34/34 MS: 1 ChangeBinInt- 00:07:37.304 [2024-07-12 14:36:13.901600] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.304 [2024-07-12 14:36:13.901713] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.304 [2024-07-12 14:36:13.901914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:20000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.304 [2024-07-12 14:36:13.901940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.304 [2024-07-12 14:36:13.901998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:3b000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.304 [2024-07-12 14:36:13.902017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.304 #33 NEW cov: 12138 ft: 15116 corp: 28/477b lim: 35 exec/s: 33 rss: 74Mb L: 14/34 MS: 1 ChangeByte- 00:07:37.304 [2024-07-12 14:36:13.951756] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 
00:07:37.304 [2024-07-12 14:36:13.951872] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.304 [2024-07-12 14:36:13.952077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00110000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.304 [2024-07-12 14:36:13.952104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.304 [2024-07-12 14:36:13.952157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000cd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.304 [2024-07-12 14:36:13.952173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.304 #34 NEW cov: 12138 ft: 15125 corp: 29/495b lim: 35 exec/s: 34 rss: 74Mb L: 18/34 MS: 1 InsertByte- 00:07:37.304 [2024-07-12 14:36:14.001906] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.304 [2024-07-12 14:36:14.002024] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.304 [2024-07-12 14:36:14.002226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.304 [2024-07-12 14:36:14.002250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.304 [2024-07-12 14:36:14.002302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00100000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.304 [2024-07-12 14:36:14.002317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.304 #35 NEW cov: 12138 ft: 15128 corp: 30/512b lim: 35 exec/s: 35 rss: 74Mb L: 17/34 MS: 1 CMP- DE: "\001\000\000\020"- 00:07:37.304 [2024-07-12 14:36:14.042019] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.304 [2024-07-12 14:36:14.042137] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.304 [2024-07-12 14:36:14.042342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00007e00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.304 [2024-07-12 14:36:14.042369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.304 [2024-07-12 14:36:14.042423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.304 [2024-07-12 14:36:14.042439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.304 #36 NEW cov: 12138 ft: 15135 corp: 31/529b lim: 35 exec/s: 36 rss: 74Mb L: 17/34 MS: 1 InsertByte- 00:07:37.304 [2024-07-12 14:36:14.082314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000023 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.304 [2024-07-12 14:36:14.082341] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.563 #37 NEW cov: 12138 ft: 15141 corp: 32/541b lim: 35 exec/s: 37 rss: 74Mb L: 12/34 MS: 1 ChangeByte- 00:07:37.563 [2024-07-12 14:36:14.122228] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.563 [2024-07-12 14:36:14.122349] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.563 [2024-07-12 14:36:14.122563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.563 [2024-07-12 14:36:14.122591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.563 [2024-07-12 14:36:14.122642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.563 [2024-07-12 14:36:14.122657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.563 #38 NEW cov: 12138 ft: 15146 corp: 33/556b lim: 35 exec/s: 38 rss: 74Mb L: 15/34 MS: 1 PersAutoDict- DE: "\000\011"- 00:07:37.563 [2024-07-12 14:36:14.172429] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.563 [2024-07-12 14:36:14.172551] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.563 [2024-07-12 14:36:14.172861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.563 [2024-07-12 14:36:14.172893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.563 [2024-07-12 14:36:14.172953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:10000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.563 [2024-07-12 14:36:14.172971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.563 [2024-07-12 14:36:14.173028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0a00000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.563 [2024-07-12 14:36:14.173044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.564 #39 NEW cov: 12138 ft: 15157 corp: 34/577b lim: 35 exec/s: 39 rss: 74Mb L: 21/34 MS: 1 ChangeBit- 00:07:37.564 [2024-07-12 14:36:14.212460] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.564 [2024-07-12 14:36:14.212588] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.564 [2024-07-12 14:36:14.212793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:20950000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.564 [2024-07-12 14:36:14.212820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.564 [2024-07-12 14:36:14.212874] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:3b000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.564 [2024-07-12 14:36:14.212890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.564 #40 NEW cov: 12138 ft: 15193 corp: 35/591b lim: 35 exec/s: 40 rss: 74Mb L: 14/34 MS: 1 ChangeByte- 00:07:37.564 [2024-07-12 14:36:14.262627] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.564 [2024-07-12 14:36:14.262933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:f500faff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.564 [2024-07-12 14:36:14.262960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.564 [2024-07-12 14:36:14.263016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:0a00ff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.564 [2024-07-12 14:36:14.263032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.564 #41 NEW cov: 12138 ft: 15202 corp: 36/608b lim: 35 exec/s: 41 rss: 74Mb L: 17/34 MS: 1 ChangeBinInt- 00:07:37.564 [2024-07-12 14:36:14.302782] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.564 [2024-07-12 14:36:14.302901] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.564 [2024-07-12 14:36:14.303203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.564 [2024-07-12 14:36:14.303231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.564 [2024-07-12 14:36:14.303285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.564 [2024-07-12 14:36:14.303299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.564 [2024-07-12 14:36:14.303351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:b0ff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.564 [2024-07-12 14:36:14.303369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.564 #42 NEW cov: 12138 ft: 15203 corp: 37/635b lim: 35 exec/s: 42 rss: 74Mb L: 27/34 MS: 1 InsertByte- 00:07:37.824 [2024-07-12 14:36:14.352896] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.824 [2024-07-12 14:36:14.353018] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.824 [2024-07-12 14:36:14.353239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.824 [2024-07-12 14:36:14.353266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:37.824 [2024-07-12 14:36:14.353322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.824 [2024-07-12 14:36:14.353338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.824 #43 NEW cov: 12138 ft: 15232 corp: 38/654b lim: 35 exec/s: 43 rss: 74Mb L: 19/34 MS: 1 ShuffleBytes- 00:07:37.824 [2024-07-12 14:36:14.393056] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.824 [2024-07-12 14:36:14.393178] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:37.824 [2024-07-12 14:36:14.393602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.824 [2024-07-12 14:36:14.393630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.824 [2024-07-12 14:36:14.393686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.824 [2024-07-12 14:36:14.393702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.824 [2024-07-12 14:36:14.393756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff000009 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.824 [2024-07-12 14:36:14.393770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.824 [2024-07-12 14:36:14.393825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:000000ff cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.824 [2024-07-12 14:36:14.393839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.824 #44 NEW cov: 12138 ft: 15235 corp: 39/682b lim: 35 exec/s: 22 rss: 74Mb L: 28/34 MS: 1 PersAutoDict- DE: "\000\011"- 00:07:37.824 #44 DONE cov: 12138 ft: 15235 corp: 39/682b lim: 35 exec/s: 22 rss: 74Mb 00:07:37.824 ###### Recommended dictionary. ###### 00:07:37.824 "\000\011" # Uses: 5 00:07:37.824 "\001\000\000\020" # Uses: 0 00:07:37.824 ###### End of recommended dictionary. 
###### 00:07:37.824 Done 44 runs in 2 second(s) 00:07:37.824 14:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:07:37.824 14:36:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:37.824 14:36:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:37.824 14:36:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:37.824 14:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:07:37.824 14:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:37.824 14:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:37.824 14:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:37.824 14:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:07:37.824 14:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:37.824 14:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:37.824 14:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:07:37.824 14:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:07:37.824 14:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:37.824 14:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:07:37.824 14:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:37.824 14:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:37.824 14:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:37.824 14:36:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:07:37.824 [2024-07-12 14:36:14.599580] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:07:37.824 [2024-07-12 14:36:14.599654] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1424835 ] 00:07:38.083 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.083 [2024-07-12 14:36:14.808916] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.342 [2024-07-12 14:36:14.881965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.342 [2024-07-12 14:36:14.941261] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.342 [2024-07-12 14:36:14.957461] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:07:38.342 INFO: Running with entropic power schedule (0xFF, 100). 00:07:38.342 INFO: Seed: 1025240136 00:07:38.342 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:38.342 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:38.342 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:38.342 INFO: A corpus is not provided, starting from an empty corpus 00:07:38.342 #2 INITED exec/s: 0 rss: 65Mb 00:07:38.342 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:38.342 This may also happen if the target rejected all inputs we tried so far 00:07:38.601 NEW_FUNC[1/684]: 0x488f00 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:07:38.601 NEW_FUNC[2/684]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:38.601 #3 NEW cov: 11790 ft: 11790 corp: 2/10b lim: 20 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 CMP- DE: "\2101\312\376U}%\000"- 00:07:38.859 #4 NEW cov: 11930 ft: 12689 corp: 3/25b lim: 20 exec/s: 0 rss: 72Mb L: 15/15 MS: 1 InsertRepeatedBytes- 00:07:38.859 #7 NEW cov: 11936 ft: 12967 corp: 4/36b lim: 20 exec/s: 0 rss: 72Mb L: 11/15 MS: 3 CopyPart-InsertByte-PersAutoDict- DE: "\2101\312\376U}%\000"- 00:07:38.859 #8 NEW cov: 12021 ft: 13222 corp: 5/47b lim: 20 exec/s: 0 rss: 72Mb L: 11/15 MS: 1 ShuffleBytes- 00:07:38.859 #9 NEW cov: 12038 ft: 13530 corp: 6/65b lim: 20 exec/s: 0 rss: 72Mb L: 18/18 MS: 1 InsertRepeatedBytes- 00:07:39.116 #10 NEW cov: 12038 ft: 13584 corp: 7/83b lim: 20 exec/s: 0 rss: 72Mb L: 18/18 MS: 1 ChangeByte- 00:07:39.116 #11 NEW cov: 12038 ft: 13674 corp: 8/95b lim: 20 exec/s: 0 rss: 72Mb L: 12/18 MS: 1 EraseBytes- 00:07:39.116 #12 NEW cov: 12038 ft: 13702 corp: 9/113b lim: 20 exec/s: 0 rss: 72Mb L: 18/18 MS: 1 ShuffleBytes- 00:07:39.116 #13 NEW cov: 12038 ft: 13724 corp: 10/130b lim: 20 exec/s: 0 rss: 72Mb L: 17/18 MS: 1 EraseBytes- 00:07:39.116 #14 NEW cov: 12038 ft: 13779 corp: 11/148b lim: 20 exec/s: 0 rss: 72Mb L: 18/18 MS: 1 PersAutoDict- DE: "\2101\312\376U}%\000"- 00:07:39.374 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:39.374 #15 NEW cov: 12061 ft: 13834 corp: 12/166b lim: 20 exec/s: 0 rss: 72Mb L: 18/18 MS: 1 ShuffleBytes- 00:07:39.374 #16 NEW cov: 12061 ft: 13881 corp: 13/175b lim: 20 exec/s: 0 rss: 72Mb L: 9/18 MS: 1 ChangeBinInt- 00:07:39.374 #17 NEW cov: 12061 ft: 13899 corp: 14/193b lim: 20 exec/s: 17 rss: 72Mb L: 18/18 MS: 1 ShuffleBytes- 00:07:39.374 #18 NEW cov: 
12061 ft: 13945 corp: 15/204b lim: 20 exec/s: 18 rss: 73Mb L: 11/18 MS: 1 EraseBytes- 00:07:39.374 #19 NEW cov: 12061 ft: 13970 corp: 16/222b lim: 20 exec/s: 19 rss: 73Mb L: 18/18 MS: 1 ChangeBinInt- 00:07:39.633 #20 NEW cov: 12061 ft: 13989 corp: 17/240b lim: 20 exec/s: 20 rss: 73Mb L: 18/18 MS: 1 ChangeBit- 00:07:39.633 #21 NEW cov: 12061 ft: 14032 corp: 18/258b lim: 20 exec/s: 21 rss: 73Mb L: 18/18 MS: 1 ShuffleBytes- 00:07:39.633 #22 NEW cov: 12061 ft: 14120 corp: 19/273b lim: 20 exec/s: 22 rss: 73Mb L: 15/18 MS: 1 CopyPart- 00:07:39.633 #23 NEW cov: 12061 ft: 14122 corp: 20/282b lim: 20 exec/s: 23 rss: 73Mb L: 9/18 MS: 1 EraseBytes- 00:07:39.892 #24 NEW cov: 12061 ft: 14134 corp: 21/291b lim: 20 exec/s: 24 rss: 73Mb L: 9/18 MS: 1 ShuffleBytes- 00:07:39.892 #25 NEW cov: 12061 ft: 14138 corp: 22/302b lim: 20 exec/s: 25 rss: 73Mb L: 11/18 MS: 1 ChangeBinInt- 00:07:39.892 #26 NEW cov: 12061 ft: 14152 corp: 23/320b lim: 20 exec/s: 26 rss: 73Mb L: 18/18 MS: 1 InsertByte- 00:07:39.892 #27 NEW cov: 12061 ft: 14190 corp: 24/329b lim: 20 exec/s: 27 rss: 73Mb L: 9/18 MS: 1 ChangeBinInt- 00:07:40.151 #28 NEW cov: 12061 ft: 14200 corp: 25/344b lim: 20 exec/s: 28 rss: 73Mb L: 15/18 MS: 1 InsertRepeatedBytes- 00:07:40.151 #29 NEW cov: 12061 ft: 14235 corp: 26/362b lim: 20 exec/s: 29 rss: 73Mb L: 18/18 MS: 1 PersAutoDict- DE: "\2101\312\376U}%\000"- 00:07:40.151 #30 NEW cov: 12061 ft: 14262 corp: 27/380b lim: 20 exec/s: 30 rss: 73Mb L: 18/18 MS: 1 ChangeBinInt- 00:07:40.151 #31 NEW cov: 12061 ft: 14269 corp: 28/399b lim: 20 exec/s: 31 rss: 73Mb L: 19/19 MS: 1 PersAutoDict- DE: "\2101\312\376U}%\000"- 00:07:40.151 #32 NEW cov: 12061 ft: 14281 corp: 29/408b lim: 20 exec/s: 32 rss: 74Mb L: 9/19 MS: 1 ChangeBit- 00:07:40.410 #33 NEW cov: 12061 ft: 14288 corp: 30/417b lim: 20 exec/s: 33 rss: 74Mb L: 9/19 MS: 1 CMP- DE: "\377\377\377\377\377\000\000\000"- 00:07:40.410 #34 NEW cov: 12061 ft: 14298 corp: 31/431b lim: 20 exec/s: 17 rss: 74Mb L: 14/19 MS: 1 CopyPart- 00:07:40.410 #34 DONE cov: 12061 ft: 14298 corp: 31/431b lim: 20 exec/s: 17 rss: 74Mb 00:07:40.410 ###### Recommended dictionary. ###### 00:07:40.410 "\2101\312\376U}%\000" # Uses: 4 00:07:40.410 "\377\377\377\377\377\000\000\000" # Uses: 0 00:07:40.410 ###### End of recommended dictionary. 
###### 00:07:40.410 Done 34 runs in 2 second(s) 00:07:40.410 14:36:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:07:40.410 14:36:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:40.410 14:36:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:40.410 14:36:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:40.410 14:36:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:07:40.410 14:36:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:40.410 14:36:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:40.410 14:36:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:40.410 14:36:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:07:40.410 14:36:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:40.410 14:36:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:40.410 14:36:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:07:40.410 14:36:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:07:40.410 14:36:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:40.410 14:36:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:07:40.410 14:36:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:40.410 14:36:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:40.410 14:36:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:40.410 14:36:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:07:40.410 [2024-07-12 14:36:17.189646] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:07:40.410 [2024-07-12 14:36:17.189719] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1425191 ] 00:07:40.669 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.669 [2024-07-12 14:36:17.400872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.928 [2024-07-12 14:36:17.473445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.928 [2024-07-12 14:36:17.532723] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.928 [2024-07-12 14:36:17.548934] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:07:40.928 INFO: Running with entropic power schedule (0xFF, 100). 00:07:40.928 INFO: Seed: 3617256863 00:07:40.928 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:40.928 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:40.928 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:40.928 INFO: A corpus is not provided, starting from an empty corpus 00:07:40.928 #2 INITED exec/s: 0 rss: 65Mb 00:07:40.928 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:40.928 This may also happen if the target rejected all inputs we tried so far 00:07:40.928 [2024-07-12 14:36:17.626533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00005a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.928 [2024-07-12 14:36:17.626579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.187 NEW_FUNC[1/696]: 0x489ff0 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:07:41.187 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:41.187 #12 NEW cov: 11906 ft: 11907 corp: 2/12b lim: 35 exec/s: 0 rss: 72Mb L: 11/11 MS: 5 InsertByte-InsertByte-ChangeByte-CopyPart-CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:41.187 [2024-07-12 14:36:17.967103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00005a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.187 [2024-07-12 14:36:17.967152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.446 #13 NEW cov: 12036 ft: 12424 corp: 3/23b lim: 35 exec/s: 0 rss: 72Mb L: 11/11 MS: 1 ShuffleBytes- 00:07:41.446 [2024-07-12 14:36:18.037346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00005a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.446 [2024-07-12 14:36:18.037379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.446 #14 NEW cov: 12042 ft: 12655 corp: 4/34b lim: 35 exec/s: 0 rss: 73Mb L: 11/11 MS: 1 ShuffleBytes- 00:07:41.446 [2024-07-12 14:36:18.097560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00005a00 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.446 [2024-07-12 14:36:18.097590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.446 #20 NEW cov: 12127 ft: 13020 corp: 5/45b lim: 35 exec/s: 0 rss: 73Mb L: 11/11 MS: 1 ShuffleBytes- 00:07:41.446 [2024-07-12 14:36:18.147763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00005a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.446 [2024-07-12 14:36:18.147794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.446 #21 NEW cov: 12127 ft: 13137 corp: 6/56b lim: 35 exec/s: 0 rss: 73Mb L: 11/11 MS: 1 ShuffleBytes- 00:07:41.446 [2024-07-12 14:36:18.208264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00005a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.446 [2024-07-12 14:36:18.208296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.446 #22 NEW cov: 12127 ft: 13267 corp: 7/65b lim: 35 exec/s: 0 rss: 73Mb L: 9/11 MS: 1 EraseBytes- 00:07:41.705 [2024-07-12 14:36:18.258443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.705 [2024-07-12 14:36:18.258475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.705 #27 NEW cov: 12127 ft: 13343 corp: 8/74b lim: 35 exec/s: 0 rss: 73Mb L: 9/11 MS: 5 ChangeByte-ChangeByte-ChangeBit-CopyPart-PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:41.705 [2024-07-12 14:36:18.308685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00005a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.705 [2024-07-12 14:36:18.308715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.705 #28 NEW cov: 12127 ft: 13402 corp: 9/85b lim: 35 exec/s: 0 rss: 73Mb L: 11/11 MS: 1 ChangeBit- 00:07:41.705 [2024-07-12 14:36:18.358822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00005a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.705 [2024-07-12 14:36:18.358852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.705 #29 NEW cov: 12127 ft: 13456 corp: 10/96b lim: 35 exec/s: 0 rss: 73Mb L: 11/11 MS: 1 CopyPart- 00:07:41.705 [2024-07-12 14:36:18.429696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00005a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.705 [2024-07-12 14:36:18.429726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.705 [2024-07-12 14:36:18.429818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:009b0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.705 [2024-07-12 14:36:18.429835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:41.705 #30 NEW cov: 12127 ft: 14214 corp: 11/113b lim: 35 exec/s: 0 rss: 73Mb L: 17/17 MS: 1 CopyPart- 00:07:41.964 [2024-07-12 14:36:18.500046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00005a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.964 [2024-07-12 14:36:18.500074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.964 [2024-07-12 14:36:18.500165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ff240000 cdw11:7d570003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.964 [2024-07-12 14:36:18.500183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.964 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:41.964 #36 NEW cov: 12150 ft: 14255 corp: 12/132b lim: 35 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 CMP- DE: "\377$}W\330\246\"\264"- 00:07:41.964 [2024-07-12 14:36:18.560123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00005a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.964 [2024-07-12 14:36:18.560150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.964 [2024-07-12 14:36:18.560238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.964 [2024-07-12 14:36:18.560253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.964 #37 NEW cov: 12150 ft: 14268 corp: 13/148b lim: 35 exec/s: 0 rss: 73Mb L: 16/19 MS: 1 CrossOver- 00:07:41.964 [2024-07-12 14:36:18.610013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000005a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.964 [2024-07-12 14:36:18.610041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.964 #38 NEW cov: 12150 ft: 14343 corp: 14/159b lim: 35 exec/s: 38 rss: 73Mb L: 11/19 MS: 1 ShuffleBytes- 00:07:41.964 [2024-07-12 14:36:18.670194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00005a00 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.964 [2024-07-12 14:36:18.670221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.964 #39 NEW cov: 12150 ft: 14372 corp: 15/166b lim: 35 exec/s: 39 rss: 73Mb L: 7/19 MS: 1 CrossOver- 00:07:41.964 [2024-07-12 14:36:18.720730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000475a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.964 [2024-07-12 14:36:18.720757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.964 [2024-07-12 14:36:18.720851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.964 [2024-07-12 14:36:18.720868] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.223 #40 NEW cov: 12150 ft: 14406 corp: 16/183b lim: 35 exec/s: 40 rss: 73Mb L: 17/19 MS: 1 InsertByte- 00:07:42.223 [2024-07-12 14:36:18.780503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63630002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.223 [2024-07-12 14:36:18.780533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.223 #41 NEW cov: 12150 ft: 14441 corp: 17/195b lim: 35 exec/s: 41 rss: 73Mb L: 12/19 MS: 1 InsertRepeatedBytes- 00:07:42.223 [2024-07-12 14:36:18.830679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.223 [2024-07-12 14:36:18.830705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.223 #42 NEW cov: 12150 ft: 14456 corp: 18/203b lim: 35 exec/s: 42 rss: 73Mb L: 8/19 MS: 1 EraseBytes- 00:07:42.223 [2024-07-12 14:36:18.891116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00005a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.223 [2024-07-12 14:36:18.891146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.223 #43 NEW cov: 12150 ft: 14503 corp: 19/212b lim: 35 exec/s: 43 rss: 73Mb L: 9/19 MS: 1 ShuffleBytes- 00:07:42.223 [2024-07-12 14:36:18.951718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63630002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.223 [2024-07-12 14:36:18.951745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.223 [2024-07-12 14:36:18.951836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c2c263c2 cdw11:c2c20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.223 [2024-07-12 14:36:18.951852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.223 #44 NEW cov: 12150 ft: 14511 corp: 20/230b lim: 35 exec/s: 44 rss: 74Mb L: 18/19 MS: 1 InsertRepeatedBytes- 00:07:42.482 [2024-07-12 14:36:19.011664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63630002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.482 [2024-07-12 14:36:19.011693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.482 #45 NEW cov: 12150 ft: 14523 corp: 21/242b lim: 35 exec/s: 45 rss: 74Mb L: 12/19 MS: 1 ChangeByte- 00:07:42.482 [2024-07-12 14:36:19.062022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00005a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.482 [2024-07-12 14:36:19.062049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.482 [2024-07-12 14:36:19.062136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 
cdw10:ff240000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.482 [2024-07-12 14:36:19.062151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.482 #46 NEW cov: 12150 ft: 14546 corp: 22/261b lim: 35 exec/s: 46 rss: 74Mb L: 19/19 MS: 1 ChangeBinInt- 00:07:42.482 [2024-07-12 14:36:19.132246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000005a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.482 [2024-07-12 14:36:19.132275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.482 [2024-07-12 14:36:19.132367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ff1effff cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.482 [2024-07-12 14:36:19.132385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.482 #47 NEW cov: 12150 ft: 14583 corp: 23/276b lim: 35 exec/s: 47 rss: 74Mb L: 15/19 MS: 1 CMP- DE: "\377\377\377\036"- 00:07:42.482 [2024-07-12 14:36:19.202407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:5a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.482 [2024-07-12 14:36:19.202435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.482 [2024-07-12 14:36:19.202526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.482 [2024-07-12 14:36:19.202561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.482 #48 NEW cov: 12150 ft: 14613 corp: 24/294b lim: 35 exec/s: 48 rss: 74Mb L: 18/19 MS: 1 CrossOver- 00:07:42.742 [2024-07-12 14:36:19.272446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff5aff cdw11:1e000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.742 [2024-07-12 14:36:19.272482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.742 #49 NEW cov: 12150 ft: 14678 corp: 25/305b lim: 35 exec/s: 49 rss: 74Mb L: 11/19 MS: 1 PersAutoDict- DE: "\377\377\377\036"- 00:07:42.742 [2024-07-12 14:36:19.323126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00005a00 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.742 [2024-07-12 14:36:19.323152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.742 [2024-07-12 14:36:19.323248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c400c4c4 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.742 [2024-07-12 14:36:19.323265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.742 #50 NEW cov: 12150 ft: 14698 corp: 26/320b lim: 35 exec/s: 50 rss: 74Mb L: 15/19 MS: 1 InsertRepeatedBytes- 00:07:42.742 [2024-07-12 14:36:19.373602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) 
qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63630002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.742 [2024-07-12 14:36:19.373630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.742 [2024-07-12 14:36:19.373737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c2c263c2 cdw11:c2c20002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.742 [2024-07-12 14:36:19.373755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.742 #51 NEW cov: 12150 ft: 14712 corp: 27/338b lim: 35 exec/s: 51 rss: 74Mb L: 18/19 MS: 1 ShuffleBytes- 00:07:42.742 [2024-07-12 14:36:19.434808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00005a00 cdw11:a0a00001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.742 [2024-07-12 14:36:19.434837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.742 [2024-07-12 14:36:19.434930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:a0a0a0a0 cdw11:a0a00001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.742 [2024-07-12 14:36:19.434947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.742 [2024-07-12 14:36:19.435044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:a0a0a0a0 cdw11:a0a00001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.742 [2024-07-12 14:36:19.435061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.742 [2024-07-12 14:36:19.435152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:a0a0a0a0 cdw11:a0a00000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.742 [2024-07-12 14:36:19.435169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.742 #52 NEW cov: 12150 ft: 15110 corp: 28/372b lim: 35 exec/s: 52 rss: 74Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:42.742 [2024-07-12 14:36:19.493892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff5aff cdw11:1e000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.742 [2024-07-12 14:36:19.493921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.001 #53 NEW cov: 12150 ft: 15127 corp: 29/381b lim: 35 exec/s: 53 rss: 74Mb L: 9/34 MS: 1 EraseBytes- 00:07:43.001 [2024-07-12 14:36:19.564816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00005a00 cdw11:00c40003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.001 [2024-07-12 14:36:19.564849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.001 [2024-07-12 14:36:19.564940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c40000c4 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.001 [2024-07-12 14:36:19.564958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:07:43.001 #54 NEW cov: 12150 ft: 15133 corp: 30/396b lim: 35 exec/s: 27 rss: 74Mb L: 15/34 MS: 1 ShuffleBytes- 00:07:43.002 #54 DONE cov: 12150 ft: 15133 corp: 30/396b lim: 35 exec/s: 27 rss: 74Mb 00:07:43.002 ###### Recommended dictionary. ###### 00:07:43.002 "\000\000\000\000\000\000\000\000" # Uses: 1 00:07:43.002 "\377$}W\330\246\"\264" # Uses: 0 00:07:43.002 "\377\377\377\036" # Uses: 1 00:07:43.002 ###### End of recommended dictionary. ###### 00:07:43.002 Done 54 runs in 2 second(s) 00:07:43.002 14:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:07:43.002 14:36:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:43.002 14:36:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:43.002 14:36:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:43.002 14:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:07:43.002 14:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:43.002 14:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:43.002 14:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:43.002 14:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:07:43.002 14:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:43.002 14:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:43.002 14:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:07:43.002 14:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:07:43.002 14:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:43.002 14:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:07:43.002 14:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:43.002 14:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:43.002 14:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:43.002 14:36:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:07:43.002 [2024-07-12 14:36:19.780950] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:07:43.002 [2024-07-12 14:36:19.781020] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1425545 ] 00:07:43.261 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.261 [2024-07-12 14:36:19.991456] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.520 [2024-07-12 14:36:20.071698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.520 [2024-07-12 14:36:20.131536] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.520 [2024-07-12 14:36:20.147735] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:07:43.520 INFO: Running with entropic power schedule (0xFF, 100). 00:07:43.520 INFO: Seed: 1921303446 00:07:43.520 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:43.520 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:43.520 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:43.520 INFO: A corpus is not provided, starting from an empty corpus 00:07:43.520 #2 INITED exec/s: 0 rss: 64Mb 00:07:43.520 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:43.520 This may also happen if the target rejected all inputs we tried so far 00:07:43.520 [2024-07-12 14:36:20.213109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.520 [2024-07-12 14:36:20.213139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.779 NEW_FUNC[1/696]: 0x48c180 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:07:43.779 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:43.779 #30 NEW cov: 11917 ft: 11918 corp: 2/15b lim: 45 exec/s: 0 rss: 72Mb L: 14/14 MS: 3 ChangeBit-CrossOver-InsertRepeatedBytes- 00:07:43.779 [2024-07-12 14:36:20.554578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4b4b0a4b cdw11:4b4b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.779 [2024-07-12 14:36:20.554638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.779 [2024-07-12 14:36:20.554715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:4b4b4b4b cdw11:4b4b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.779 [2024-07-12 14:36:20.554741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.779 [2024-07-12 14:36:20.554815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4b4b4b4b cdw11:4b4b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.779 [2024-07-12 14:36:20.554839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.779 [2024-07-12 14:36:20.554912] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:4b4b4b4b cdw11:4b4b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.779 [2024-07-12 14:36:20.554936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.038 #32 NEW cov: 12047 ft: 13485 corp: 3/51b lim: 45 exec/s: 0 rss: 72Mb L: 36/36 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:44.038 [2024-07-12 14:36:20.603943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.038 [2024-07-12 14:36:20.603969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.038 #33 NEW cov: 12053 ft: 13712 corp: 4/65b lim: 45 exec/s: 0 rss: 72Mb L: 14/36 MS: 1 ChangeBinInt- 00:07:44.038 [2024-07-12 14:36:20.654116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffeeff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.038 [2024-07-12 14:36:20.654144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.038 #37 NEW cov: 12138 ft: 13983 corp: 5/80b lim: 45 exec/s: 0 rss: 72Mb L: 15/36 MS: 4 ChangeBit-ChangeBit-ChangeByte-CrossOver- 00:07:44.038 [2024-07-12 14:36:20.694393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffeeff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.038 [2024-07-12 14:36:20.694419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.038 [2024-07-12 14:36:20.694470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.038 [2024-07-12 14:36:20.694486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.038 #38 NEW cov: 12138 ft: 14299 corp: 6/98b lim: 45 exec/s: 0 rss: 72Mb L: 18/36 MS: 1 CrossOver- 00:07:44.038 [2024-07-12 14:36:20.744400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:bfffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.038 [2024-07-12 14:36:20.744423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.038 #39 NEW cov: 12138 ft: 14381 corp: 7/113b lim: 45 exec/s: 0 rss: 72Mb L: 15/36 MS: 1 InsertByte- 00:07:44.038 [2024-07-12 14:36:20.784514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0100ffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.038 [2024-07-12 14:36:20.784543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.038 #40 NEW cov: 12138 ft: 14432 corp: 8/127b lim: 45 exec/s: 0 rss: 72Mb L: 14/36 MS: 1 ChangeBinInt- 00:07:44.296 [2024-07-12 14:36:20.834864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffeefe cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.296 [2024-07-12 14:36:20.834890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.296 #41 NEW cov: 12138 ft: 14548 corp: 9/142b lim: 45 exec/s: 0 rss: 72Mb L: 15/36 MS: 1 ChangeBit- 00:07:44.297 [2024-07-12 14:36:20.874731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffeeff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.297 [2024-07-12 14:36:20.874756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.297 #42 NEW cov: 12138 ft: 14571 corp: 10/157b lim: 45 exec/s: 0 rss: 72Mb L: 15/36 MS: 1 ShuffleBytes- 00:07:44.297 [2024-07-12 14:36:20.915021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffeeff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.297 [2024-07-12 14:36:20.915047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.297 [2024-07-12 14:36:20.915098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0102ffff cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.297 [2024-07-12 14:36:20.915111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.297 #43 NEW cov: 12138 ft: 14608 corp: 11/176b lim: 45 exec/s: 0 rss: 72Mb L: 19/36 MS: 1 CMP- DE: "\001\002\000\000"- 00:07:44.297 [2024-07-12 14:36:20.965007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffeefe cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.297 [2024-07-12 14:36:20.965032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.297 #44 NEW cov: 12138 ft: 14648 corp: 12/191b lim: 45 exec/s: 0 rss: 72Mb L: 15/36 MS: 1 ShuffleBytes- 00:07:44.297 [2024-07-12 14:36:21.015628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4b4b0a4b cdw11:4b4b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.297 [2024-07-12 14:36:21.015653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.297 [2024-07-12 14:36:21.015702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:4b4b4b4b cdw11:4b4b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.297 [2024-07-12 14:36:21.015716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.297 [2024-07-12 14:36:21.015770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4b4b4b4b cdw11:4b4b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.297 [2024-07-12 14:36:21.015783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.297 [2024-07-12 14:36:21.015832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:4b4b4b4b cdw11:4b4b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.297 [2024-07-12 14:36:21.015845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.297 #50 NEW cov: 12138 ft: 14684 corp: 13/227b lim: 45 exec/s: 0 
rss: 73Mb L: 36/36 MS: 1 ChangeBit- 00:07:44.297 [2024-07-12 14:36:21.065292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.297 [2024-07-12 14:36:21.065317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.554 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:44.554 #51 NEW cov: 12161 ft: 14723 corp: 14/237b lim: 45 exec/s: 0 rss: 73Mb L: 10/36 MS: 1 CrossOver- 00:07:44.554 [2024-07-12 14:36:21.105834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4b4b0a4b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.554 [2024-07-12 14:36:21.105859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.554 [2024-07-12 14:36:21.105911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:244b0000 cdw11:4b4b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.554 [2024-07-12 14:36:21.105925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.554 [2024-07-12 14:36:21.105974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4b4b4b4b cdw11:4b4b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.554 [2024-07-12 14:36:21.106002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.554 [2024-07-12 14:36:21.106053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:4b4b4b4b cdw11:4b4b0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.554 [2024-07-12 14:36:21.106066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.554 #52 NEW cov: 12161 ft: 14814 corp: 15/273b lim: 45 exec/s: 0 rss: 73Mb L: 36/36 MS: 1 ChangeBinInt- 00:07:44.554 [2024-07-12 14:36:21.145535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffeefe cdw11:69ff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.554 [2024-07-12 14:36:21.145560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.554 #53 NEW cov: 12161 ft: 14866 corp: 16/288b lim: 45 exec/s: 0 rss: 73Mb L: 15/36 MS: 1 ChangeByte- 00:07:44.554 [2024-07-12 14:36:21.195804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffeeff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.554 [2024-07-12 14:36:21.195829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.554 [2024-07-12 14:36:21.195881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.554 [2024-07-12 14:36:21.195894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.554 #54 NEW cov: 12161 ft: 14898 corp: 17/306b lim: 45 exec/s: 54 rss: 73Mb L: 18/36 
MS: 1 ChangeByte- 00:07:44.554 [2024-07-12 14:36:21.246216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffeefe cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.554 [2024-07-12 14:36:21.246245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.554 [2024-07-12 14:36:21.246296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.554 [2024-07-12 14:36:21.246309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.554 [2024-07-12 14:36:21.246360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.554 [2024-07-12 14:36:21.246373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.554 [2024-07-12 14:36:21.246422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.554 [2024-07-12 14:36:21.246435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.554 #55 NEW cov: 12161 ft: 14934 corp: 18/350b lim: 45 exec/s: 55 rss: 73Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:07:44.554 [2024-07-12 14:36:21.285944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffeefe cdw11:ffff0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.554 [2024-07-12 14:36:21.285970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.554 #56 NEW cov: 12161 ft: 14948 corp: 19/365b lim: 45 exec/s: 56 rss: 73Mb L: 15/44 MS: 1 ChangeBit- 00:07:44.554 [2024-07-12 14:36:21.326172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffeeff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.554 [2024-07-12 14:36:21.326197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.554 [2024-07-12 14:36:21.326249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0102ffff cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.554 [2024-07-12 14:36:21.326263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.812 #57 NEW cov: 12161 ft: 14953 corp: 20/384b lim: 45 exec/s: 57 rss: 73Mb L: 19/44 MS: 1 PersAutoDict- DE: "\001\002\000\000"- 00:07:44.812 [2024-07-12 14:36:21.376167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff2aff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.812 [2024-07-12 14:36:21.376191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.812 #59 NEW cov: 12161 ft: 14972 corp: 21/397b lim: 45 exec/s: 59 rss: 73Mb L: 13/44 MS: 2 ChangeBit-CrossOver- 00:07:44.812 [2024-07-12 14:36:21.416300] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffeeff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.812 [2024-07-12 14:36:21.416324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.812 #60 NEW cov: 12161 ft: 14978 corp: 22/412b lim: 45 exec/s: 60 rss: 73Mb L: 15/44 MS: 1 PersAutoDict- DE: "\001\002\000\000"- 00:07:44.812 [2024-07-12 14:36:21.456397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.812 [2024-07-12 14:36:21.456422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.812 #61 NEW cov: 12161 ft: 15041 corp: 23/427b lim: 45 exec/s: 61 rss: 73Mb L: 15/44 MS: 1 CopyPart- 00:07:44.812 [2024-07-12 14:36:21.506813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff2aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.812 [2024-07-12 14:36:21.506838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.812 [2024-07-12 14:36:21.506904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fff50000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.812 [2024-07-12 14:36:21.506919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.812 [2024-07-12 14:36:21.506971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0102ffff cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.812 [2024-07-12 14:36:21.506985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.812 #62 NEW cov: 12161 ft: 15301 corp: 24/454b lim: 45 exec/s: 62 rss: 73Mb L: 27/44 MS: 1 CrossOver- 00:07:44.812 [2024-07-12 14:36:21.556654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffeefe cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.812 [2024-07-12 14:36:21.556678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.812 #63 NEW cov: 12161 ft: 15311 corp: 25/470b lim: 45 exec/s: 63 rss: 73Mb L: 16/44 MS: 1 InsertByte- 00:07:44.812 [2024-07-12 14:36:21.596854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffeeff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.812 [2024-07-12 14:36:21.596879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.071 #64 NEW cov: 12161 ft: 15323 corp: 26/483b lim: 45 exec/s: 64 rss: 73Mb L: 13/44 MS: 1 EraseBytes- 00:07:45.071 [2024-07-12 14:36:21.636855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.071 [2024-07-12 14:36:21.636880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.071 #65 NEW cov: 12161 ft: 15325 corp: 27/496b lim: 45 
exec/s: 65 rss: 73Mb L: 13/44 MS: 1 ChangeBinInt- 00:07:45.071 [2024-07-12 14:36:21.677454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff2aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.071 [2024-07-12 14:36:21.677479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.071 [2024-07-12 14:36:21.677534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.071 [2024-07-12 14:36:21.677548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.071 [2024-07-12 14:36:21.677599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.071 [2024-07-12 14:36:21.677612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.071 [2024-07-12 14:36:21.677662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:0aff0af5 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.071 [2024-07-12 14:36:21.677675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.071 #66 NEW cov: 12161 ft: 15350 corp: 28/534b lim: 45 exec/s: 66 rss: 73Mb L: 38/44 MS: 1 CrossOver- 00:07:45.071 [2024-07-12 14:36:21.727143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff2aff cdw11:fbff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.071 [2024-07-12 14:36:21.727173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.071 #67 NEW cov: 12161 ft: 15360 corp: 29/547b lim: 45 exec/s: 67 rss: 73Mb L: 13/44 MS: 1 ChangeBit- 00:07:45.071 [2024-07-12 14:36:21.767268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0200ff01 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.071 [2024-07-12 14:36:21.767293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.071 #73 NEW cov: 12161 ft: 15378 corp: 30/561b lim: 45 exec/s: 73 rss: 73Mb L: 14/44 MS: 1 PersAutoDict- DE: "\001\002\000\000"- 00:07:45.071 [2024-07-12 14:36:21.817395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffeeff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.071 [2024-07-12 14:36:21.817419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.071 #74 NEW cov: 12161 ft: 15385 corp: 31/577b lim: 45 exec/s: 74 rss: 73Mb L: 16/44 MS: 1 InsertByte- 00:07:45.330 [2024-07-12 14:36:21.867541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:01022aff cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.330 [2024-07-12 14:36:21.867565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.330 #75 NEW cov: 12161 ft: 15411 corp: 32/594b lim: 45 exec/s: 
75 rss: 73Mb L: 17/44 MS: 1 PersAutoDict- DE: "\001\002\000\000"- 00:07:45.330 [2024-07-12 14:36:21.917676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c000ffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.330 [2024-07-12 14:36:21.917703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.330 #76 NEW cov: 12161 ft: 15505 corp: 33/609b lim: 45 exec/s: 76 rss: 73Mb L: 15/44 MS: 1 ChangeBinInt- 00:07:45.330 [2024-07-12 14:36:21.957785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffeeff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.330 [2024-07-12 14:36:21.957810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.330 #77 NEW cov: 12161 ft: 15525 corp: 34/624b lim: 45 exec/s: 77 rss: 73Mb L: 15/44 MS: 1 ChangeBit- 00:07:45.330 [2024-07-12 14:36:21.997905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffeeff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.330 [2024-07-12 14:36:21.997929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.330 #78 NEW cov: 12161 ft: 15543 corp: 35/639b lim: 45 exec/s: 78 rss: 73Mb L: 15/44 MS: 1 ShuffleBytes- 00:07:45.331 [2024-07-12 14:36:22.038325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff2aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.331 [2024-07-12 14:36:22.038349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.331 [2024-07-12 14:36:22.038400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fff50000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.331 [2024-07-12 14:36:22.038413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.331 [2024-07-12 14:36:22.038464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff01ffff cdw11:02000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.331 [2024-07-12 14:36:22.038477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.331 #79 NEW cov: 12161 ft: 15553 corp: 36/667b lim: 45 exec/s: 79 rss: 73Mb L: 28/44 MS: 1 InsertByte- 00:07:45.331 [2024-07-12 14:36:22.078140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffeeff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.331 [2024-07-12 14:36:22.078165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.331 #80 NEW cov: 12161 ft: 15592 corp: 37/682b lim: 45 exec/s: 80 rss: 74Mb L: 15/44 MS: 1 CMP- DE: "\000\000\000\007"- 00:07:45.590 [2024-07-12 14:36:22.128257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.590 [2024-07-12 14:36:22.128283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.590 #81 NEW cov: 12161 ft: 15596 corp: 38/695b lim: 45 exec/s: 81 rss: 74Mb L: 13/44 MS: 1 PersAutoDict- DE: "\000\000\000\007"- 00:07:45.590 [2024-07-12 14:36:22.178865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff2aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.590 [2024-07-12 14:36:22.178891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.590 [2024-07-12 14:36:22.178945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffeeffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.590 [2024-07-12 14:36:22.178958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.590 [2024-07-12 14:36:22.179023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:0afffff5 cdw11:b0ff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.590 [2024-07-12 14:36:22.179037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.590 [2024-07-12 14:36:22.179086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:fbff02ff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.590 [2024-07-12 14:36:22.179099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.590 #87 NEW cov: 12161 ft: 15608 corp: 39/738b lim: 45 exec/s: 43 rss: 74Mb L: 43/44 MS: 1 CrossOver- 00:07:45.590 #87 DONE cov: 12161 ft: 15608 corp: 39/738b lim: 45 exec/s: 43 rss: 74Mb 00:07:45.590 ###### Recommended dictionary. ###### 00:07:45.590 "\001\002\000\000" # Uses: 5 00:07:45.590 "\000\000\000\007" # Uses: 1 00:07:45.590 ###### End of recommended dictionary. 
###### 00:07:45.590 Done 87 runs in 2 second(s) 00:07:45.590 14:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:07:45.590 14:36:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:45.590 14:36:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:45.590 14:36:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:45.590 14:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:07:45.590 14:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:45.590 14:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:45.590 14:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:45.590 14:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:07:45.590 14:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:45.590 14:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:45.590 14:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:07:45.590 14:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:07:45.590 14:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:45.590 14:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:07:45.590 14:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:45.590 14:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:45.590 14:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:45.590 14:36:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:07:45.849 [2024-07-12 14:36:22.395193] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:07:45.849 [2024-07-12 14:36:22.395266] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1425859 ] 00:07:45.849 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.849 [2024-07-12 14:36:22.604056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.108 [2024-07-12 14:36:22.677274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.108 [2024-07-12 14:36:22.736558] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.108 [2024-07-12 14:36:22.752762] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:07:46.108 INFO: Running with entropic power schedule (0xFF, 100). 00:07:46.108 INFO: Seed: 231310065 00:07:46.108 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:46.108 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:46.108 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:46.108 INFO: A corpus is not provided, starting from an empty corpus 00:07:46.108 #2 INITED exec/s: 0 rss: 65Mb 00:07:46.108 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:46.108 This may also happen if the target rejected all inputs we tried so far 00:07:46.108 [2024-07-12 14:36:22.830022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f1ff cdw11:00000000 00:07:46.108 [2024-07-12 14:36:22.830064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.367 NEW_FUNC[1/693]: 0x48e990 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:07:46.367 NEW_FUNC[2/693]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:46.367 #9 NEW cov: 11833 ft: 11834 corp: 2/3b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 2 ChangeBinInt-InsertByte- 00:07:46.625 [2024-07-12 14:36:23.170984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f1ff cdw11:00000000 00:07:46.625 [2024-07-12 14:36:23.171031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.625 [2024-07-12 14:36:23.171123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000f1ff cdw11:00000000 00:07:46.625 [2024-07-12 14:36:23.171156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.625 NEW_FUNC[1/1]: 0x1aaa620 in spdk_sock_recv /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/sock/sock.c:461 00:07:46.625 #10 NEW cov: 11964 ft: 12672 corp: 3/7b lim: 10 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 CopyPart- 00:07:46.625 [2024-07-12 14:36:23.240930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f1ff cdw11:00000000 00:07:46.625 [2024-07-12 14:36:23.240964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:46.625 #11 NEW cov: 11970 ft: 12974 corp: 4/9b lim: 10 exec/s: 0 rss: 72Mb L: 2/4 MS: 1 ShuffleBytes- 00:07:46.625 [2024-07-12 14:36:23.291140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f176 cdw11:00000000 00:07:46.625 [2024-07-12 14:36:23.291167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.625 #12 NEW cov: 12055 ft: 13239 corp: 5/11b lim: 10 exec/s: 0 rss: 72Mb L: 2/4 MS: 1 ChangeByte- 00:07:46.625 [2024-07-12 14:36:23.341322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f1ff cdw11:00000000 00:07:46.625 [2024-07-12 14:36:23.341350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.625 #13 NEW cov: 12055 ft: 13380 corp: 6/13b lim: 10 exec/s: 0 rss: 72Mb L: 2/4 MS: 1 ChangeByte- 00:07:46.625 [2024-07-12 14:36:23.401593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f1ff cdw11:00000000 00:07:46.626 [2024-07-12 14:36:23.401620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.884 #14 NEW cov: 12055 ft: 13435 corp: 7/15b lim: 10 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 CopyPart- 00:07:46.884 [2024-07-12 14:36:23.461932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f1f1 cdw11:00000000 00:07:46.884 [2024-07-12 14:36:23.461959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.884 [2024-07-12 14:36:23.462060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fff1 cdw11:00000000 00:07:46.884 [2024-07-12 14:36:23.462075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.884 #15 NEW cov: 12055 ft: 13496 corp: 8/20b lim: 10 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:07:46.884 [2024-07-12 14:36:23.522230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:46.884 [2024-07-12 14:36:23.522257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.884 [2024-07-12 14:36:23.522340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fff1 cdw11:00000000 00:07:46.884 [2024-07-12 14:36:23.522357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.884 #16 NEW cov: 12055 ft: 13540 corp: 9/25b lim: 10 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:46.884 [2024-07-12 14:36:23.582344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003dff cdw11:00000000 00:07:46.884 [2024-07-12 14:36:23.582371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.884 [2024-07-12 14:36:23.582462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000f1ff cdw11:00000000 00:07:46.884 [2024-07-12 14:36:23.582479] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.884 #17 NEW cov: 12055 ft: 13576 corp: 10/29b lim: 10 exec/s: 0 rss: 73Mb L: 4/5 MS: 1 ChangeByte- 00:07:46.884 [2024-07-12 14:36:23.632237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f1f1 cdw11:00000000 00:07:46.884 [2024-07-12 14:36:23.632264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.884 #18 NEW cov: 12055 ft: 13652 corp: 11/32b lim: 10 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 CrossOver- 00:07:47.143 [2024-07-12 14:36:23.693282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:47.143 [2024-07-12 14:36:23.693313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.143 [2024-07-12 14:36:23.693409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fff1 cdw11:00000000 00:07:47.143 [2024-07-12 14:36:23.693426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.143 [2024-07-12 14:36:23.693507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:47.143 [2024-07-12 14:36:23.693525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.144 [2024-07-12 14:36:23.693617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:47.144 [2024-07-12 14:36:23.693635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.144 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:47.144 #19 NEW cov: 12078 ft: 13938 corp: 12/40b lim: 10 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:07:47.144 [2024-07-12 14:36:23.762965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000021ff cdw11:00000000 00:07:47.144 [2024-07-12 14:36:23.762992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.144 [2024-07-12 14:36:23.763083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000f1ff cdw11:00000000 00:07:47.144 [2024-07-12 14:36:23.763101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.144 #20 NEW cov: 12078 ft: 13975 corp: 13/44b lim: 10 exec/s: 0 rss: 73Mb L: 4/8 MS: 1 ChangeByte- 00:07:47.144 [2024-07-12 14:36:23.813256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:47.144 [2024-07-12 14:36:23.813283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.144 [2024-07-12 14:36:23.813372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 00:07:47.144 [2024-07-12 14:36:23.813389] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.144 #21 NEW cov: 12078 ft: 14011 corp: 14/48b lim: 10 exec/s: 21 rss: 73Mb L: 4/8 MS: 1 ChangeBinInt- 00:07:47.144 [2024-07-12 14:36:23.873151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f1f1 cdw11:00000000 00:07:47.144 [2024-07-12 14:36:23.873176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.144 #22 NEW cov: 12078 ft: 14037 corp: 15/51b lim: 10 exec/s: 22 rss: 73Mb L: 3/8 MS: 1 ShuffleBytes- 00:07:47.403 [2024-07-12 14:36:23.934351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:47.403 [2024-07-12 14:36:23.934377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.403 [2024-07-12 14:36:23.934464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fff1 cdw11:00000000 00:07:47.403 [2024-07-12 14:36:23.934481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.403 [2024-07-12 14:36:23.934575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:47.403 [2024-07-12 14:36:23.934590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.403 [2024-07-12 14:36:23.934674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000efff cdw11:00000000 00:07:47.403 [2024-07-12 14:36:23.934689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.403 #23 NEW cov: 12078 ft: 14059 corp: 16/59b lim: 10 exec/s: 23 rss: 73Mb L: 8/8 MS: 1 ChangeBit- 00:07:47.403 [2024-07-12 14:36:23.994289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000014ff cdw11:00000000 00:07:47.403 [2024-07-12 14:36:23.994314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.403 [2024-07-12 14:36:23.994406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:47.403 [2024-07-12 14:36:23.994423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.403 [2024-07-12 14:36:23.994503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000f1ff cdw11:00000000 00:07:47.403 [2024-07-12 14:36:23.994520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.403 #28 NEW cov: 12078 ft: 14198 corp: 17/65b lim: 10 exec/s: 28 rss: 73Mb L: 6/8 MS: 5 CrossOver-ShuffleBytes-ChangeBinInt-ShuffleBytes-CrossOver- 00:07:47.403 [2024-07-12 14:36:24.043821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f139 cdw11:00000000 00:07:47.403 [2024-07-12 14:36:24.043848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.403 #29 NEW cov: 12078 ft: 14218 corp: 18/68b lim: 10 exec/s: 29 rss: 73Mb L: 3/8 MS: 1 InsertByte- 00:07:47.403 [2024-07-12 14:36:24.094979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000505 cdw11:00000000 00:07:47.403 [2024-07-12 14:36:24.095004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.403 [2024-07-12 14:36:24.095093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000505 cdw11:00000000 00:07:47.403 [2024-07-12 14:36:24.095110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.403 [2024-07-12 14:36:24.095194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000505 cdw11:00000000 00:07:47.403 [2024-07-12 14:36:24.095208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.403 [2024-07-12 14:36:24.095294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000505 cdw11:00000000 00:07:47.403 [2024-07-12 14:36:24.095310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.403 [2024-07-12 14:36:24.095388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000050a cdw11:00000000 00:07:47.404 [2024-07-12 14:36:24.095404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:47.404 #31 NEW cov: 12078 ft: 14265 corp: 19/78b lim: 10 exec/s: 31 rss: 73Mb L: 10/10 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:47.404 [2024-07-12 14:36:24.144171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f138 cdw11:00000000 00:07:47.404 [2024-07-12 14:36:24.144196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.404 #32 NEW cov: 12078 ft: 14283 corp: 20/81b lim: 10 exec/s: 32 rss: 73Mb L: 3/10 MS: 1 ChangeASCIIInt- 00:07:47.663 [2024-07-12 14:36:24.205101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:47.663 [2024-07-12 14:36:24.205130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.663 [2024-07-12 14:36:24.205219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000125 cdw11:00000000 00:07:47.663 [2024-07-12 14:36:24.205236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.663 [2024-07-12 14:36:24.205312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000f1f1 cdw11:00000000 00:07:47.663 [2024-07-12 14:36:24.205326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.663 [2024-07-12 14:36:24.205411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 
cdw10:0000fff1 cdw11:00000000 00:07:47.663 [2024-07-12 14:36:24.205426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.663 #33 NEW cov: 12078 ft: 14313 corp: 21/90b lim: 10 exec/s: 33 rss: 73Mb L: 9/10 MS: 1 CMP- DE: "\000\000\001%"- 00:07:47.663 [2024-07-12 14:36:24.255082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001313 cdw11:00000000 00:07:47.663 [2024-07-12 14:36:24.255108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.663 [2024-07-12 14:36:24.255196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001313 cdw11:00000000 00:07:47.663 [2024-07-12 14:36:24.255212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.663 [2024-07-12 14:36:24.255297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000f1ff cdw11:00000000 00:07:47.663 [2024-07-12 14:36:24.255314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.663 #34 NEW cov: 12078 ft: 14326 corp: 22/96b lim: 10 exec/s: 34 rss: 73Mb L: 6/10 MS: 1 InsertRepeatedBytes- 00:07:47.663 [2024-07-12 14:36:24.305169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000013b9 cdw11:00000000 00:07:47.663 [2024-07-12 14:36:24.305194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.663 [2024-07-12 14:36:24.305273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001313 cdw11:00000000 00:07:47.663 [2024-07-12 14:36:24.305288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.663 [2024-07-12 14:36:24.305370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000f1ff cdw11:00000000 00:07:47.663 [2024-07-12 14:36:24.305387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.663 #35 NEW cov: 12078 ft: 14342 corp: 23/102b lim: 10 exec/s: 35 rss: 73Mb L: 6/10 MS: 1 ChangeByte- 00:07:47.663 [2024-07-12 14:36:24.365364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000bfc1 cdw11:00000000 00:07:47.663 [2024-07-12 14:36:24.365390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.663 [2024-07-12 14:36:24.365482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000c1c1 cdw11:00000000 00:07:47.663 [2024-07-12 14:36:24.365501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.663 [2024-07-12 14:36:24.365589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000c1c1 cdw11:00000000 00:07:47.663 [2024-07-12 14:36:24.365607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:07:47.663 #39 NEW cov: 12078 ft: 14372 corp: 24/109b lim: 10 exec/s: 39 rss: 74Mb L: 7/10 MS: 4 EraseBytes-ChangeBit-CopyPart-InsertRepeatedBytes- 00:07:47.663 [2024-07-12 14:36:24.425102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f1ff cdw11:00000000 00:07:47.663 [2024-07-12 14:36:24.425128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.663 #40 NEW cov: 12078 ft: 14396 corp: 25/111b lim: 10 exec/s: 40 rss: 74Mb L: 2/10 MS: 1 CrossOver- 00:07:47.923 [2024-07-12 14:36:24.476374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:47.923 [2024-07-12 14:36:24.476403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.923 [2024-07-12 14:36:24.476493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:47.923 [2024-07-12 14:36:24.476509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.923 [2024-07-12 14:36:24.476613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:47.923 [2024-07-12 14:36:24.476631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.923 [2024-07-12 14:36:24.476716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:47.923 [2024-07-12 14:36:24.476732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.923 [2024-07-12 14:36:24.476815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000fff1 cdw11:00000000 00:07:47.923 [2024-07-12 14:36:24.476833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:47.923 #42 NEW cov: 12078 ft: 14401 corp: 26/121b lim: 10 exec/s: 42 rss: 74Mb L: 10/10 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:47.923 [2024-07-12 14:36:24.526356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001313 cdw11:00000000 00:07:47.923 [2024-07-12 14:36:24.526386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.923 [2024-07-12 14:36:24.526481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:47.923 [2024-07-12 14:36:24.526497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.923 [2024-07-12 14:36:24.526584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff13 cdw11:00000000 00:07:47.923 [2024-07-12 14:36:24.526601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.923 [2024-07-12 14:36:24.526681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000013f1 
cdw11:00000000 00:07:47.923 [2024-07-12 14:36:24.526698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.923 #43 NEW cov: 12078 ft: 14479 corp: 27/130b lim: 10 exec/s: 43 rss: 74Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:07:47.923 [2024-07-12 14:36:24.576081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000021ff cdw11:00000000 00:07:47.923 [2024-07-12 14:36:24.576108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.923 [2024-07-12 14:36:24.576201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000f5ff cdw11:00000000 00:07:47.923 [2024-07-12 14:36:24.576219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.923 #44 NEW cov: 12078 ft: 14519 corp: 28/134b lim: 10 exec/s: 44 rss: 74Mb L: 4/10 MS: 1 ChangeBit- 00:07:47.924 [2024-07-12 14:36:24.636308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:47.924 [2024-07-12 14:36:24.636335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.924 [2024-07-12 14:36:24.636425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 00:07:47.924 [2024-07-12 14:36:24.636442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.924 #45 NEW cov: 12078 ft: 14525 corp: 29/138b lim: 10 exec/s: 45 rss: 74Mb L: 4/10 MS: 1 CopyPart- 00:07:47.924 [2024-07-12 14:36:24.697149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:47.924 [2024-07-12 14:36:24.697175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.924 [2024-07-12 14:36:24.697263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:47.924 [2024-07-12 14:36:24.697279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.924 [2024-07-12 14:36:24.697357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:47.924 [2024-07-12 14:36:24.697373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.924 [2024-07-12 14:36:24.697460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:47.924 [2024-07-12 14:36:24.697474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.924 [2024-07-12 14:36:24.697563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000fff1 cdw11:00000000 00:07:47.924 [2024-07-12 14:36:24.697578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:48.183 #46 NEW cov: 12078 ft: 
14529 corp: 30/148b lim: 10 exec/s: 46 rss: 74Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:48.183 [2024-07-12 14:36:24.756399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000021f5 cdw11:00000000 00:07:48.183 [2024-07-12 14:36:24.756425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.183 #47 NEW cov: 12078 ft: 14546 corp: 31/151b lim: 10 exec/s: 47 rss: 74Mb L: 3/10 MS: 1 EraseBytes- 00:07:48.183 [2024-07-12 14:36:24.816825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f1ff cdw11:00000000 00:07:48.183 [2024-07-12 14:36:24.816850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.183 [2024-07-12 14:36:24.816935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000f125 cdw11:00000000 00:07:48.183 [2024-07-12 14:36:24.816953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.183 #48 NEW cov: 12078 ft: 14602 corp: 32/155b lim: 10 exec/s: 24 rss: 74Mb L: 4/10 MS: 1 ChangeByte- 00:07:48.183 #48 DONE cov: 12078 ft: 14602 corp: 32/155b lim: 10 exec/s: 24 rss: 74Mb 00:07:48.183 ###### Recommended dictionary. ###### 00:07:48.183 "\000\000\001%" # Uses: 0 00:07:48.183 ###### End of recommended dictionary. ###### 00:07:48.183 Done 48 runs in 2 second(s) 00:07:48.183 14:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:07:48.183 14:36:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:48.183 14:36:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:48.183 14:36:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:07:48.183 14:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:07:48.183 14:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:48.183 14:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:48.183 14:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:48.183 14:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:07:48.183 14:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:48.183 14:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:48.183 14:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:07:48.442 14:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:07:48.442 14:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:48.442 14:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:07:48.442 14:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:48.442 14:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo 
leak:spdk_nvmf_qpair_disconnect 00:07:48.442 14:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:48.442 14:36:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:07:48.442 [2024-07-12 14:36:25.008691] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:07:48.442 [2024-07-12 14:36:25.008763] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1426175 ] 00:07:48.442 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.442 [2024-07-12 14:36:25.227333] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.701 [2024-07-12 14:36:25.300755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.701 [2024-07-12 14:36:25.360022] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.701 [2024-07-12 14:36:25.376223] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:07:48.701 INFO: Running with entropic power schedule (0xFF, 100). 00:07:48.701 INFO: Seed: 2855311892 00:07:48.701 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:48.701 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:48.701 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:48.701 INFO: A corpus is not provided, starting from an empty corpus 00:07:48.701 #2 INITED exec/s: 0 rss: 65Mb 00:07:48.701 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:48.701 This may also happen if the target rejected all inputs we tried so far 00:07:48.701 [2024-07-12 14:36:25.441492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:48.701 [2024-07-12 14:36:25.441520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.269 NEW_FUNC[1/693]: 0x48f380 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:07:49.269 NEW_FUNC[2/693]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:49.269 #3 NEW cov: 11830 ft: 11835 corp: 2/3b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CopyPart- 00:07:49.269 [2024-07-12 14:36:25.782800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:49.269 [2024-07-12 14:36:25.782859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.269 NEW_FUNC[1/1]: 0x1d97270 in spdk_thread_get_last_tsc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1324 00:07:49.269 #5 NEW cov: 11964 ft: 12464 corp: 3/5b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 2 CrossOver-CopyPart- 00:07:49.269 [2024-07-12 14:36:25.832490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:49.269 [2024-07-12 14:36:25.832516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.269 #6 NEW cov: 11970 ft: 12643 corp: 4/7b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CrossOver- 00:07:49.269 [2024-07-12 14:36:25.872978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aa6 cdw11:00000000 00:07:49.269 [2024-07-12 14:36:25.873003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.269 [2024-07-12 14:36:25.873069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000a6a6 cdw11:00000000 00:07:49.269 [2024-07-12 14:36:25.873083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.269 [2024-07-12 14:36:25.873132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000a6a6 cdw11:00000000 00:07:49.269 [2024-07-12 14:36:25.873146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.269 [2024-07-12 14:36:25.873195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000a60a cdw11:00000000 00:07:49.269 [2024-07-12 14:36:25.873208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.269 #7 NEW cov: 12055 ft: 13184 corp: 5/15b lim: 10 exec/s: 0 rss: 72Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:07:49.269 [2024-07-12 14:36:25.922714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:49.269 [2024-07-12 14:36:25.922738] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.269 #8 NEW cov: 12055 ft: 13313 corp: 6/17b lim: 10 exec/s: 0 rss: 72Mb L: 2/8 MS: 1 CopyPart- 00:07:49.269 [2024-07-12 14:36:25.962885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a02 cdw11:00000000 00:07:49.269 [2024-07-12 14:36:25.962910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.269 #9 NEW cov: 12055 ft: 13447 corp: 7/19b lim: 10 exec/s: 0 rss: 72Mb L: 2/8 MS: 1 ChangeBinInt- 00:07:49.269 [2024-07-12 14:36:26.013002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a02 cdw11:00000000 00:07:49.269 [2024-07-12 14:36:26.013026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.269 #10 NEW cov: 12055 ft: 13510 corp: 8/22b lim: 10 exec/s: 0 rss: 72Mb L: 3/8 MS: 1 CrossOver- 00:07:49.527 [2024-07-12 14:36:26.063154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000020a cdw11:00000000 00:07:49.527 [2024-07-12 14:36:26.063178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.527 #11 NEW cov: 12055 ft: 13659 corp: 9/24b lim: 10 exec/s: 0 rss: 72Mb L: 2/8 MS: 1 ShuffleBytes- 00:07:49.527 [2024-07-12 14:36:26.103192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a02 cdw11:00000000 00:07:49.527 [2024-07-12 14:36:26.103216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.527 #12 NEW cov: 12055 ft: 13712 corp: 10/26b lim: 10 exec/s: 0 rss: 72Mb L: 2/8 MS: 1 CopyPart- 00:07:49.527 [2024-07-12 14:36:26.143318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000e0a cdw11:00000000 00:07:49.527 [2024-07-12 14:36:26.143342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.527 #13 NEW cov: 12055 ft: 13801 corp: 11/28b lim: 10 exec/s: 0 rss: 72Mb L: 2/8 MS: 1 ChangeBit- 00:07:49.527 [2024-07-12 14:36:26.193454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002a02 cdw11:00000000 00:07:49.527 [2024-07-12 14:36:26.193480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.527 #14 NEW cov: 12055 ft: 13825 corp: 12/30b lim: 10 exec/s: 0 rss: 72Mb L: 2/8 MS: 1 ChangeBit- 00:07:49.527 [2024-07-12 14:36:26.233686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:49.527 [2024-07-12 14:36:26.233712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.527 [2024-07-12 14:36:26.233763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000202 cdw11:00000000 00:07:49.527 [2024-07-12 14:36:26.233777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.527 #15 NEW 
cov: 12055 ft: 14038 corp: 13/34b lim: 10 exec/s: 0 rss: 72Mb L: 4/8 MS: 1 CopyPart- 00:07:49.527 [2024-07-12 14:36:26.283686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000410e cdw11:00000000 00:07:49.527 [2024-07-12 14:36:26.283712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.786 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:49.786 #16 NEW cov: 12078 ft: 14138 corp: 14/37b lim: 10 exec/s: 0 rss: 72Mb L: 3/8 MS: 1 InsertByte- 00:07:49.786 [2024-07-12 14:36:26.333843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a02 cdw11:00000000 00:07:49.786 [2024-07-12 14:36:26.333868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.786 #17 NEW cov: 12078 ft: 14157 corp: 15/40b lim: 10 exec/s: 0 rss: 72Mb L: 3/8 MS: 1 EraseBytes- 00:07:49.786 [2024-07-12 14:36:26.383992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000eb32 cdw11:00000000 00:07:49.786 [2024-07-12 14:36:26.384019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.786 #21 NEW cov: 12078 ft: 14167 corp: 16/42b lim: 10 exec/s: 0 rss: 72Mb L: 2/8 MS: 4 EraseBytes-ChangeByte-ChangeByte-InsertByte- 00:07:49.786 [2024-07-12 14:36:26.424435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aa6 cdw11:00000000 00:07:49.786 [2024-07-12 14:36:26.424461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.786 [2024-07-12 14:36:26.424514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000a6a6 cdw11:00000000 00:07:49.786 [2024-07-12 14:36:26.424533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.787 [2024-07-12 14:36:26.424584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000a6a6 cdw11:00000000 00:07:49.787 [2024-07-12 14:36:26.424601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.787 [2024-07-12 14:36:26.424654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000a60a cdw11:00000000 00:07:49.787 [2024-07-12 14:36:26.424667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.787 #22 NEW cov: 12078 ft: 14193 corp: 17/50b lim: 10 exec/s: 22 rss: 73Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:49.787 [2024-07-12 14:36:26.474241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:49.787 [2024-07-12 14:36:26.474267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.787 #23 NEW cov: 12078 ft: 14244 corp: 18/52b lim: 10 exec/s: 23 rss: 73Mb L: 2/8 MS: 1 ShuffleBytes- 00:07:49.787 [2024-07-12 14:36:26.524738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:49.787 [2024-07-12 14:36:26.524763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.787 [2024-07-12 14:36:26.524815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.787 [2024-07-12 14:36:26.524828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.787 [2024-07-12 14:36:26.524878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.787 [2024-07-12 14:36:26.524892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.787 [2024-07-12 14:36:26.524942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.787 [2024-07-12 14:36:26.524955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.787 #25 NEW cov: 12078 ft: 14246 corp: 19/61b lim: 10 exec/s: 25 rss: 73Mb L: 9/9 MS: 2 ShuffleBytes-CMP- DE: "\001\000\000\000\000\000\000\006"- 00:07:49.787 [2024-07-12 14:36:26.564515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ccfd cdw11:00000000 00:07:49.787 [2024-07-12 14:36:26.564544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.046 #26 NEW cov: 12078 ft: 14251 corp: 20/63b lim: 10 exec/s: 26 rss: 73Mb L: 2/9 MS: 1 ChangeBinInt- 00:07:50.046 [2024-07-12 14:36:26.614984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000e0a cdw11:00000000 00:07:50.046 [2024-07-12 14:36:26.615010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.046 [2024-07-12 14:36:26.615064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b4b4 cdw11:00000000 00:07:50.046 [2024-07-12 14:36:26.615078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.046 [2024-07-12 14:36:26.615146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000b4b4 cdw11:00000000 00:07:50.046 [2024-07-12 14:36:26.615160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.046 [2024-07-12 14:36:26.615212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000b4b4 cdw11:00000000 00:07:50.046 [2024-07-12 14:36:26.615226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.046 #27 NEW cov: 12078 ft: 14271 corp: 21/72b lim: 10 exec/s: 27 rss: 73Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:07:50.046 [2024-07-12 14:36:26.654765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a5d cdw11:00000000 00:07:50.046 [2024-07-12 14:36:26.654790] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.046 #28 NEW cov: 12078 ft: 14283 corp: 22/75b lim: 10 exec/s: 28 rss: 73Mb L: 3/9 MS: 1 InsertByte- 00:07:50.046 [2024-07-12 14:36:26.695256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a02 cdw11:00000000 00:07:50.046 [2024-07-12 14:36:26.695281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.046 [2024-07-12 14:36:26.695331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000000 00:07:50.046 [2024-07-12 14:36:26.695345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.046 [2024-07-12 14:36:26.695396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:50.046 [2024-07-12 14:36:26.695409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.046 [2024-07-12 14:36:26.695459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:50.046 [2024-07-12 14:36:26.695472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.046 #29 NEW cov: 12078 ft: 14289 corp: 23/84b lim: 10 exec/s: 29 rss: 73Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:07:50.046 [2024-07-12 14:36:26.745110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000e0e cdw11:00000000 00:07:50.046 [2024-07-12 14:36:26.745134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.046 [2024-07-12 14:36:26.745185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:50.046 [2024-07-12 14:36:26.745199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.046 #30 NEW cov: 12078 ft: 14300 corp: 24/88b lim: 10 exec/s: 30 rss: 73Mb L: 4/9 MS: 1 CopyPart- 00:07:50.046 [2024-07-12 14:36:26.785132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005b0e cdw11:00000000 00:07:50.046 [2024-07-12 14:36:26.785156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.046 #31 NEW cov: 12078 ft: 14318 corp: 25/91b lim: 10 exec/s: 31 rss: 73Mb L: 3/9 MS: 1 ChangeByte- 00:07:50.305 [2024-07-12 14:36:26.835304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005f0e cdw11:00000000 00:07:50.305 [2024-07-12 14:36:26.835329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.305 #32 NEW cov: 12078 ft: 14329 corp: 26/94b lim: 10 exec/s: 32 rss: 73Mb L: 3/9 MS: 1 ChangeBinInt- 00:07:50.305 [2024-07-12 14:36:26.885403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000e71 cdw11:00000000 00:07:50.305 [2024-07-12 14:36:26.885427] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.305 #33 NEW cov: 12078 ft: 14384 corp: 27/96b lim: 10 exec/s: 33 rss: 73Mb L: 2/9 MS: 1 ChangeByte- 00:07:50.305 [2024-07-12 14:36:26.925520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aa6 cdw11:00000000 00:07:50.305 [2024-07-12 14:36:26.925550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.305 #34 NEW cov: 12078 ft: 14405 corp: 28/98b lim: 10 exec/s: 34 rss: 73Mb L: 2/9 MS: 1 CrossOver- 00:07:50.305 [2024-07-12 14:36:26.965656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:50.305 [2024-07-12 14:36:26.965683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.305 #35 NEW cov: 12078 ft: 14448 corp: 29/100b lim: 10 exec/s: 35 rss: 73Mb L: 2/9 MS: 1 CopyPart- 00:07:50.305 [2024-07-12 14:36:27.016116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aa6 cdw11:00000000 00:07:50.305 [2024-07-12 14:36:27.016141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.305 [2024-07-12 14:36:27.016193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000a6a6 cdw11:00000000 00:07:50.305 [2024-07-12 14:36:27.016206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.305 [2024-07-12 14:36:27.016256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000a6a6 cdw11:00000000 00:07:50.305 [2024-07-12 14:36:27.016269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.305 [2024-07-12 14:36:27.016321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000097a6 cdw11:00000000 00:07:50.305 [2024-07-12 14:36:27.016334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.305 #36 NEW cov: 12078 ft: 14464 corp: 30/109b lim: 10 exec/s: 36 rss: 73Mb L: 9/9 MS: 1 InsertByte- 00:07:50.305 [2024-07-12 14:36:27.055903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cc0a cdw11:00000000 00:07:50.305 [2024-07-12 14:36:27.055927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.305 #37 NEW cov: 12078 ft: 14527 corp: 31/112b lim: 10 exec/s: 37 rss: 73Mb L: 3/9 MS: 1 CrossOver- 00:07:50.563 [2024-07-12 14:36:27.106172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a8a cdw11:00000000 00:07:50.564 [2024-07-12 14:36:27.106197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.564 [2024-07-12 14:36:27.106246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008a8a cdw11:00000000 00:07:50.564 [2024-07-12 14:36:27.106260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.564 #38 NEW cov: 12078 ft: 14542 corp: 32/117b lim: 10 exec/s: 38 rss: 73Mb L: 5/9 MS: 1 InsertRepeatedBytes- 00:07:50.564 [2024-07-12 14:36:27.146198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005bf2 cdw11:00000000 00:07:50.564 [2024-07-12 14:36:27.146221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.564 #39 NEW cov: 12078 ft: 14554 corp: 33/120b lim: 10 exec/s: 39 rss: 73Mb L: 3/9 MS: 1 ChangeBinInt- 00:07:50.564 [2024-07-12 14:36:27.186776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:50.564 [2024-07-12 14:36:27.186800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.564 [2024-07-12 14:36:27.186851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:50.564 [2024-07-12 14:36:27.186864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.564 [2024-07-12 14:36:27.186914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:50.564 [2024-07-12 14:36:27.186930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.564 [2024-07-12 14:36:27.186978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:50.564 [2024-07-12 14:36:27.186991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.564 [2024-07-12 14:36:27.187039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000602 cdw11:00000000 00:07:50.564 [2024-07-12 14:36:27.187053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:50.564 #40 NEW cov: 12078 ft: 14594 corp: 34/130b lim: 10 exec/s: 40 rss: 73Mb L: 10/10 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\006"- 00:07:50.564 [2024-07-12 14:36:27.226420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a02 cdw11:00000000 00:07:50.564 [2024-07-12 14:36:27.226445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.564 #41 NEW cov: 12078 ft: 14606 corp: 35/132b lim: 10 exec/s: 41 rss: 73Mb L: 2/10 MS: 1 EraseBytes- 00:07:50.564 [2024-07-12 14:36:27.266511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a02 cdw11:00000000 00:07:50.564 [2024-07-12 14:36:27.266541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.564 #42 NEW cov: 12078 ft: 14613 corp: 36/135b lim: 10 exec/s: 42 rss: 73Mb L: 3/10 MS: 1 InsertByte- 00:07:50.564 [2024-07-12 14:36:27.306970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a02 cdw11:00000000 00:07:50.564 [2024-07-12 14:36:27.306994] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.564 [2024-07-12 14:36:27.307044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000a64b cdw11:00000000 00:07:50.564 [2024-07-12 14:36:27.307057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.564 [2024-07-12 14:36:27.307108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00004b4b cdw11:00000000 00:07:50.564 [2024-07-12 14:36:27.307121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.564 [2024-07-12 14:36:27.307170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00004b4b cdw11:00000000 00:07:50.564 [2024-07-12 14:36:27.307182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.564 #43 NEW cov: 12078 ft: 14647 corp: 37/144b lim: 10 exec/s: 43 rss: 73Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:07:50.823 [2024-07-12 14:36:27.356781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:50.823 [2024-07-12 14:36:27.356805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.823 #44 NEW cov: 12078 ft: 14689 corp: 38/147b lim: 10 exec/s: 44 rss: 73Mb L: 3/10 MS: 1 CrossOver- 00:07:50.823 [2024-07-12 14:36:27.407116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a02 cdw11:00000000 00:07:50.823 [2024-07-12 14:36:27.407142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.823 [2024-07-12 14:36:27.407194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c7c7 cdw11:00000000 00:07:50.823 [2024-07-12 14:36:27.407207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.823 [2024-07-12 14:36:27.407274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c7c7 cdw11:00000000 00:07:50.823 [2024-07-12 14:36:27.407288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.823 #45 NEW cov: 12078 ft: 14884 corp: 39/154b lim: 10 exec/s: 22 rss: 73Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:07:50.823 #45 DONE cov: 12078 ft: 14884 corp: 39/154b lim: 10 exec/s: 22 rss: 73Mb 00:07:50.823 ###### Recommended dictionary. ###### 00:07:50.823 "\001\000\000\000\000\000\000\006" # Uses: 1 00:07:50.823 ###### End of recommended dictionary. 
###### 00:07:50.823 Done 45 runs in 2 second(s) 00:07:50.823 14:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:07:50.823 14:36:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:50.823 14:36:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:50.823 14:36:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:07:50.823 14:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:07:50.823 14:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:50.823 14:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:50.823 14:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:50.823 14:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:07:50.823 14:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:50.823 14:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:50.823 14:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:07:50.823 14:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:07:50.823 14:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:50.823 14:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:07:50.823 14:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:50.823 14:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:50.823 14:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:50.823 14:36:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:07:51.082 [2024-07-12 14:36:27.611851] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:07:51.082 [2024-07-12 14:36:27.611924] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1426476 ] 00:07:51.082 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.082 [2024-07-12 14:36:27.824503] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.341 [2024-07-12 14:36:27.898475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.341 [2024-07-12 14:36:27.957910] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.341 [2024-07-12 14:36:27.974114] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:07:51.341 INFO: Running with entropic power schedule (0xFF, 100). 00:07:51.341 INFO: Seed: 1158358041 00:07:51.341 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:51.341 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:51.341 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:51.341 INFO: A corpus is not provided, starting from an empty corpus 00:07:51.341 [2024-07-12 14:36:28.039485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.341 [2024-07-12 14:36:28.039513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.341 #2 INITED cov: 11862 ft: 11863 corp: 1/1b exec/s: 0 rss: 70Mb 00:07:51.341 [2024-07-12 14:36:28.079674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.341 [2024-07-12 14:36:28.079700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.341 [2024-07-12 14:36:28.079770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.341 [2024-07-12 14:36:28.079785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.341 #3 NEW cov: 11992 ft: 13178 corp: 2/3b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 InsertByte- 00:07:51.664 [2024-07-12 14:36:28.129889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.664 [2024-07-12 14:36:28.129914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.664 [2024-07-12 14:36:28.129971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.664 [2024-07-12 14:36:28.129985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.664 #4 NEW cov: 11998 ft: 13382 corp: 3/5b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 ChangeByte- 00:07:51.664 [2024-07-12 14:36:28.180084] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.664 [2024-07-12 14:36:28.180109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.664 [2024-07-12 14:36:28.180180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.664 [2024-07-12 14:36:28.180195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.664 [2024-07-12 14:36:28.180248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.664 [2024-07-12 14:36:28.180262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.664 #5 NEW cov: 12083 ft: 13788 corp: 4/8b lim: 5 exec/s: 0 rss: 71Mb L: 3/3 MS: 1 InsertByte- 00:07:51.664 [2024-07-12 14:36:28.230065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.664 [2024-07-12 14:36:28.230092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.664 [2024-07-12 14:36:28.230149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.664 [2024-07-12 14:36:28.230163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.664 #6 NEW cov: 12083 ft: 13910 corp: 5/10b lim: 5 exec/s: 0 rss: 71Mb L: 2/3 MS: 1 ShuffleBytes- 00:07:51.664 [2024-07-12 14:36:28.270376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.664 [2024-07-12 14:36:28.270400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.664 [2024-07-12 14:36:28.270471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.664 [2024-07-12 14:36:28.270485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.664 [2024-07-12 14:36:28.270545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.664 [2024-07-12 14:36:28.270559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.664 #7 NEW cov: 12083 ft: 13977 corp: 6/13b lim: 5 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 CrossOver- 00:07:51.664 [2024-07-12 14:36:28.320461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.664 [2024-07-12 14:36:28.320486] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.664 [2024-07-12 14:36:28.320550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.664 [2024-07-12 14:36:28.320564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.664 [2024-07-12 14:36:28.320620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.664 [2024-07-12 14:36:28.320634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.664 #8 NEW cov: 12083 ft: 14103 corp: 7/16b lim: 5 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 ShuffleBytes- 00:07:51.665 [2024-07-12 14:36:28.360412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.665 [2024-07-12 14:36:28.360436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.665 [2024-07-12 14:36:28.360492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.665 [2024-07-12 14:36:28.360506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.665 #9 NEW cov: 12083 ft: 14133 corp: 8/18b lim: 5 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 EraseBytes- 00:07:51.665 [2024-07-12 14:36:28.411016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.665 [2024-07-12 14:36:28.411042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.665 [2024-07-12 14:36:28.411094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.665 [2024-07-12 14:36:28.411108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.665 [2024-07-12 14:36:28.411161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.665 [2024-07-12 14:36:28.411175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.923 #10 NEW cov: 12083 ft: 14287 corp: 9/21b lim: 5 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 ChangeBit- 00:07:51.923 [2024-07-12 14:36:28.450682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.923 [2024-07-12 14:36:28.450707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.923 [2024-07-12 14:36:28.450763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.923 [2024-07-12 14:36:28.450777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.923 #11 NEW cov: 12083 ft: 14339 corp: 10/23b lim: 5 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 ChangeBinInt- 00:07:51.923 [2024-07-12 14:36:28.500986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.923 [2024-07-12 14:36:28.501010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.923 [2024-07-12 14:36:28.501066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.923 [2024-07-12 14:36:28.501080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.923 [2024-07-12 14:36:28.501136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.923 [2024-07-12 14:36:28.501149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.923 #12 NEW cov: 12083 ft: 14347 corp: 11/26b lim: 5 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 ChangeBinInt- 00:07:51.923 [2024-07-12 14:36:28.550970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.923 [2024-07-12 14:36:28.550994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.923 [2024-07-12 14:36:28.551063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.923 [2024-07-12 14:36:28.551077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.923 #13 NEW cov: 12083 ft: 14358 corp: 12/28b lim: 5 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 CopyPart- 00:07:51.923 [2024-07-12 14:36:28.591090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.923 [2024-07-12 14:36:28.591115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.923 [2024-07-12 14:36:28.591171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.923 [2024-07-12 14:36:28.591184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.923 #14 NEW cov: 12083 ft: 14382 corp: 13/30b lim: 5 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 ChangeByte- 00:07:51.924 [2024-07-12 14:36:28.631401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:51.924 [2024-07-12 14:36:28.631426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.924 [2024-07-12 14:36:28.631484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.924 [2024-07-12 14:36:28.631498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.924 [2024-07-12 14:36:28.631572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.924 [2024-07-12 14:36:28.631587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.924 #15 NEW cov: 12083 ft: 14445 corp: 14/33b lim: 5 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 CopyPart- 00:07:51.924 [2024-07-12 14:36:28.671372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.924 [2024-07-12 14:36:28.671396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.924 [2024-07-12 14:36:28.671452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.924 [2024-07-12 14:36:28.671466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.924 #16 NEW cov: 12083 ft: 14456 corp: 15/35b lim: 5 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 ShuffleBytes- 00:07:52.183 [2024-07-12 14:36:28.721499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.183 [2024-07-12 14:36:28.721524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.183 [2024-07-12 14:36:28.721590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.183 [2024-07-12 14:36:28.721604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.183 #17 NEW cov: 12083 ft: 14517 corp: 16/37b lim: 5 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 ChangeBit- 00:07:52.183 [2024-07-12 14:36:28.761730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.183 [2024-07-12 14:36:28.761755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.183 [2024-07-12 14:36:28.761825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.183 [2024-07-12 14:36:28.761840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.183 [2024-07-12 
14:36:28.761897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.183 [2024-07-12 14:36:28.761911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.183 #18 NEW cov: 12083 ft: 14523 corp: 17/40b lim: 5 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 CopyPart- 00:07:52.183 [2024-07-12 14:36:28.811946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.183 [2024-07-12 14:36:28.811970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.183 [2024-07-12 14:36:28.812043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.183 [2024-07-12 14:36:28.812060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.183 [2024-07-12 14:36:28.812116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.183 [2024-07-12 14:36:28.812130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.183 #19 NEW cov: 12083 ft: 14562 corp: 18/43b lim: 5 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 CopyPart- 00:07:52.183 [2024-07-12 14:36:28.852044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.183 [2024-07-12 14:36:28.852069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.183 [2024-07-12 14:36:28.852125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.183 [2024-07-12 14:36:28.852138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.183 [2024-07-12 14:36:28.852194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.183 [2024-07-12 14:36:28.852207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.183 #20 NEW cov: 12083 ft: 14567 corp: 19/46b lim: 5 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 CrossOver- 00:07:52.183 [2024-07-12 14:36:28.892109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.183 [2024-07-12 14:36:28.892132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.183 [2024-07-12 14:36:28.892187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.183 [2024-07-12 
14:36:28.892200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.183 [2024-07-12 14:36:28.892270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.183 [2024-07-12 14:36:28.892284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.442 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:52.442 #21 NEW cov: 12106 ft: 14635 corp: 20/49b lim: 5 exec/s: 21 rss: 73Mb L: 3/3 MS: 1 CopyPart- 00:07:52.701 [2024-07-12 14:36:29.233462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.701 [2024-07-12 14:36:29.233525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.701 [2024-07-12 14:36:29.233636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.701 [2024-07-12 14:36:29.233663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.701 [2024-07-12 14:36:29.233757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.701 [2024-07-12 14:36:29.233784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.701 #22 NEW cov: 12106 ft: 14673 corp: 21/52b lim: 5 exec/s: 22 rss: 73Mb L: 3/3 MS: 1 ChangeBit- 00:07:52.701 [2024-07-12 14:36:29.293073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.701 [2024-07-12 14:36:29.293099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.701 [2024-07-12 14:36:29.293158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.701 [2024-07-12 14:36:29.293173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.701 #23 NEW cov: 12106 ft: 14696 corp: 22/54b lim: 5 exec/s: 23 rss: 73Mb L: 2/3 MS: 1 ChangeBit- 00:07:52.701 [2024-07-12 14:36:29.343237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.701 [2024-07-12 14:36:29.343262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.701 [2024-07-12 14:36:29.343320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.701 [2024-07-12 14:36:29.343334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.701 #24 NEW cov: 12106 ft: 14724 corp: 23/56b lim: 5 exec/s: 24 rss: 73Mb L: 2/3 MS: 1 CrossOver- 00:07:52.701 [2024-07-12 14:36:29.393704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.701 [2024-07-12 14:36:29.393729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.701 [2024-07-12 14:36:29.393787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.701 [2024-07-12 14:36:29.393801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.701 [2024-07-12 14:36:29.393858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.701 [2024-07-12 14:36:29.393871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.701 [2024-07-12 14:36:29.393923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.701 [2024-07-12 14:36:29.393936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.701 #25 NEW cov: 12106 ft: 15005 corp: 24/60b lim: 5 exec/s: 25 rss: 73Mb L: 4/4 MS: 1 InsertByte- 00:07:52.701 [2024-07-12 14:36:29.443698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.701 [2024-07-12 14:36:29.443725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.701 [2024-07-12 14:36:29.443782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.701 [2024-07-12 14:36:29.443796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.701 [2024-07-12 14:36:29.443853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.701 [2024-07-12 14:36:29.443871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.701 #26 NEW cov: 12106 ft: 15017 corp: 25/63b lim: 5 exec/s: 26 rss: 73Mb L: 3/4 MS: 1 CrossOver- 00:07:52.701 [2024-07-12 14:36:29.483809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.701 [2024-07-12 14:36:29.483835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.701 [2024-07-12 14:36:29.483897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.701 [2024-07-12 14:36:29.483911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.701 [2024-07-12 14:36:29.483971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.701 [2024-07-12 14:36:29.483985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.988 #27 NEW cov: 12106 ft: 15024 corp: 26/66b lim: 5 exec/s: 27 rss: 73Mb L: 3/4 MS: 1 ChangeBit- 00:07:52.988 [2024-07-12 14:36:29.524098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.988 [2024-07-12 14:36:29.524124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.988 [2024-07-12 14:36:29.524196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.988 [2024-07-12 14:36:29.524211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.988 [2024-07-12 14:36:29.524266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.988 [2024-07-12 14:36:29.524280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.989 [2024-07-12 14:36:29.524337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.989 [2024-07-12 14:36:29.524350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.989 #28 NEW cov: 12106 ft: 15038 corp: 27/70b lim: 5 exec/s: 28 rss: 74Mb L: 4/4 MS: 1 InsertByte- 00:07:52.989 [2024-07-12 14:36:29.574019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.989 [2024-07-12 14:36:29.574044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.989 [2024-07-12 14:36:29.574099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.989 [2024-07-12 14:36:29.574113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.989 [2024-07-12 14:36:29.574167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.989 [2024-07-12 14:36:29.574181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.989 #29 NEW cov: 12106 ft: 15051 corp: 28/73b lim: 5 exec/s: 29 rss: 74Mb 
L: 3/4 MS: 1 ShuffleBytes- 00:07:52.989 [2024-07-12 14:36:29.614325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.989 [2024-07-12 14:36:29.614350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.989 [2024-07-12 14:36:29.614409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.989 [2024-07-12 14:36:29.614423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.989 [2024-07-12 14:36:29.614481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.989 [2024-07-12 14:36:29.614495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.989 [2024-07-12 14:36:29.614555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.989 [2024-07-12 14:36:29.614569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.989 #30 NEW cov: 12106 ft: 15063 corp: 29/77b lim: 5 exec/s: 30 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:07:52.989 [2024-07-12 14:36:29.654258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.989 [2024-07-12 14:36:29.654283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.989 [2024-07-12 14:36:29.654338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.989 [2024-07-12 14:36:29.654352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.989 [2024-07-12 14:36:29.654406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.989 [2024-07-12 14:36:29.654420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.989 #31 NEW cov: 12106 ft: 15083 corp: 30/80b lim: 5 exec/s: 31 rss: 74Mb L: 3/4 MS: 1 ChangeByte- 00:07:52.989 [2024-07-12 14:36:29.694385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.989 [2024-07-12 14:36:29.694410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.989 [2024-07-12 14:36:29.694465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.989 [2024-07-12 14:36:29.694479] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.989 [2024-07-12 14:36:29.694537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.989 [2024-07-12 14:36:29.694551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.989 #32 NEW cov: 12106 ft: 15099 corp: 31/83b lim: 5 exec/s: 32 rss: 74Mb L: 3/4 MS: 1 ChangeByte- 00:07:52.989 [2024-07-12 14:36:29.744433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.989 [2024-07-12 14:36:29.744458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.989 [2024-07-12 14:36:29.744521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.989 [2024-07-12 14:36:29.744540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.989 #33 NEW cov: 12106 ft: 15164 corp: 32/85b lim: 5 exec/s: 33 rss: 74Mb L: 2/4 MS: 1 ShuffleBytes- 00:07:53.248 [2024-07-12 14:36:29.784859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.248 [2024-07-12 14:36:29.784884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.248 [2024-07-12 14:36:29.784941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.248 [2024-07-12 14:36:29.784954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.248 [2024-07-12 14:36:29.785011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.248 [2024-07-12 14:36:29.785025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.248 [2024-07-12 14:36:29.785083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.248 [2024-07-12 14:36:29.785097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.248 #34 NEW cov: 12106 ft: 15229 corp: 33/89b lim: 5 exec/s: 34 rss: 74Mb L: 4/4 MS: 1 InsertByte- 00:07:53.248 [2024-07-12 14:36:29.834820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.248 [2024-07-12 14:36:29.834845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.248 [2024-07-12 14:36:29.834917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.248 [2024-07-12 14:36:29.834931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.248 [2024-07-12 14:36:29.834988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.248 [2024-07-12 14:36:29.835002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.248 #35 NEW cov: 12106 ft: 15237 corp: 34/92b lim: 5 exec/s: 35 rss: 74Mb L: 3/4 MS: 1 InsertByte- 00:07:53.248 [2024-07-12 14:36:29.875097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.248 [2024-07-12 14:36:29.875122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.248 [2024-07-12 14:36:29.875181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.248 [2024-07-12 14:36:29.875195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.248 [2024-07-12 14:36:29.875250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.248 [2024-07-12 14:36:29.875264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.248 [2024-07-12 14:36:29.875323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.248 [2024-07-12 14:36:29.875337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.248 #36 NEW cov: 12106 ft: 15260 corp: 35/96b lim: 5 exec/s: 36 rss: 74Mb L: 4/4 MS: 1 InsertByte- 00:07:53.248 [2024-07-12 14:36:29.915038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.248 [2024-07-12 14:36:29.915063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.248 [2024-07-12 14:36:29.915116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.248 [2024-07-12 14:36:29.915129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.248 [2024-07-12 14:36:29.915187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.248 [2024-07-12 14:36:29.915200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.248 #37 NEW cov: 
12106 ft: 15268 corp: 36/99b lim: 5 exec/s: 37 rss: 74Mb L: 3/4 MS: 1 ChangeByte- 00:07:53.248 [2024-07-12 14:36:29.964874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.248 [2024-07-12 14:36:29.964898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.248 #38 NEW cov: 12106 ft: 15298 corp: 37/100b lim: 5 exec/s: 38 rss: 74Mb L: 1/4 MS: 1 EraseBytes- 00:07:53.248 [2024-07-12 14:36:30.005091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.248 [2024-07-12 14:36:30.005126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.248 [2024-07-12 14:36:30.005209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.248 [2024-07-12 14:36:30.005231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.248 #39 NEW cov: 12106 ft: 15379 corp: 38/102b lim: 5 exec/s: 19 rss: 74Mb L: 2/4 MS: 1 ChangeByte- 00:07:53.248 #39 DONE cov: 12106 ft: 15379 corp: 38/102b lim: 5 exec/s: 19 rss: 74Mb 00:07:53.248 Done 39 runs in 2 second(s) 00:07:53.507 14:36:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:07:53.507 14:36:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:53.507 14:36:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:53.507 14:36:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:07:53.507 14:36:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:07:53.507 14:36:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:53.507 14:36:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:53.507 14:36:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:53.507 14:36:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:07:53.507 14:36:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:53.507 14:36:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:53.507 14:36:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:07:53.507 14:36:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:07:53.507 14:36:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:53.507 14:36:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:07:53.507 14:36:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:53.507 14:36:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo 
leak:spdk_nvmf_qpair_disconnect 00:07:53.507 14:36:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:53.507 14:36:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:07:53.507 [2024-07-12 14:36:30.209657] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:07:53.507 [2024-07-12 14:36:30.209731] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1426824 ] 00:07:53.507 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.765 [2024-07-12 14:36:30.421772] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.765 [2024-07-12 14:36:30.499094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.023 [2024-07-12 14:36:30.558665] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.023 [2024-07-12 14:36:30.574885] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:07:54.023 INFO: Running with entropic power schedule (0xFF, 100). 00:07:54.023 INFO: Seed: 3757345686 00:07:54.023 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:54.023 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:54.023 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:54.023 INFO: A corpus is not provided, starting from an empty corpus 00:07:54.023 [2024-07-12 14:36:30.646025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.023 [2024-07-12 14:36:30.646068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.023 #2 INITED cov: 11862 ft: 11863 corp: 1/1b exec/s: 0 rss: 70Mb 00:07:54.023 [2024-07-12 14:36:30.695825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.023 [2024-07-12 14:36:30.695853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.023 #3 NEW cov: 11992 ft: 12246 corp: 2/2b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 ChangeBit- 00:07:54.023 [2024-07-12 14:36:30.756494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.023 [2024-07-12 14:36:30.756522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.023 [2024-07-12 14:36:30.756612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.023 [2024-07-12 
14:36:30.756630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.023 #4 NEW cov: 11998 ft: 13232 corp: 3/4b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CopyPart- 00:07:54.023 [2024-07-12 14:36:30.806293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.023 [2024-07-12 14:36:30.806319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.280 #5 NEW cov: 12083 ft: 13452 corp: 4/5b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ChangeBit- 00:07:54.280 [2024-07-12 14:36:30.856838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.280 [2024-07-12 14:36:30.856864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.280 [2024-07-12 14:36:30.856963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.280 [2024-07-12 14:36:30.856980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.280 #6 NEW cov: 12083 ft: 13495 corp: 5/7b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 InsertByte- 00:07:54.280 [2024-07-12 14:36:30.916795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.280 [2024-07-12 14:36:30.916823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.280 #7 NEW cov: 12083 ft: 13570 corp: 6/8b lim: 5 exec/s: 0 rss: 71Mb L: 1/2 MS: 1 EraseBytes- 00:07:54.280 [2024-07-12 14:36:30.977088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.280 [2024-07-12 14:36:30.977114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.280 #8 NEW cov: 12083 ft: 13644 corp: 7/9b lim: 5 exec/s: 0 rss: 71Mb L: 1/2 MS: 1 ShuffleBytes- 00:07:54.280 [2024-07-12 14:36:31.037387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.280 [2024-07-12 14:36:31.037414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.538 #9 NEW cov: 12083 ft: 13704 corp: 8/10b lim: 5 exec/s: 0 rss: 71Mb L: 1/2 MS: 1 ShuffleBytes- 00:07:54.539 [2024-07-12 14:36:31.097642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.539 [2024-07-12 14:36:31.097669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.539 #10 NEW cov: 12083 ft: 13769 corp: 9/11b lim: 5 exec/s: 0 rss: 71Mb L: 1/2 MS: 1 ChangeBinInt- 00:07:54.539 [2024-07-12 
14:36:31.158098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.539 [2024-07-12 14:36:31.158123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.539 [2024-07-12 14:36:31.158220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.539 [2024-07-12 14:36:31.158237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.539 #11 NEW cov: 12083 ft: 13843 corp: 10/13b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ChangeBinInt- 00:07:54.539 [2024-07-12 14:36:31.218485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.539 [2024-07-12 14:36:31.218515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.539 [2024-07-12 14:36:31.218628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.539 [2024-07-12 14:36:31.218646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.539 #12 NEW cov: 12083 ft: 13873 corp: 11/15b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ChangeBit- 00:07:54.539 [2024-07-12 14:36:31.279692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.539 [2024-07-12 14:36:31.279717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.539 [2024-07-12 14:36:31.279806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.539 [2024-07-12 14:36:31.279823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.539 [2024-07-12 14:36:31.279915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.539 [2024-07-12 14:36:31.279932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.539 [2024-07-12 14:36:31.280022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.539 [2024-07-12 14:36:31.280040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.539 [2024-07-12 14:36:31.280129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.539 [2024-07-12 14:36:31.280145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.539 #13 NEW cov: 12083 ft: 14264 corp: 12/20b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:54.797 [2024-07-12 14:36:31.349171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.797 [2024-07-12 14:36:31.349198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.797 [2024-07-12 14:36:31.349299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.797 [2024-07-12 14:36:31.349315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.797 #14 NEW cov: 12083 ft: 14389 corp: 13/22b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:54.797 [2024-07-12 14:36:31.400542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.797 [2024-07-12 14:36:31.400567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.797 [2024-07-12 14:36:31.400666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.797 [2024-07-12 14:36:31.400681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.797 [2024-07-12 14:36:31.400774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.797 [2024-07-12 14:36:31.400793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.797 [2024-07-12 14:36:31.400881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.797 [2024-07-12 14:36:31.400897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.797 [2024-07-12 14:36:31.400985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.797 [2024-07-12 14:36:31.401001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.797 #15 NEW cov: 12083 ft: 14411 corp: 14/27b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:54.797 [2024-07-12 14:36:31.469764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.797 [2024-07-12 14:36:31.469791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.797 [2024-07-12 14:36:31.469882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.797 [2024-07-12 14:36:31.469899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.054 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:55.054 #16 NEW cov: 12106 ft: 14494 corp: 15/29b lim: 5 exec/s: 16 rss: 73Mb L: 2/5 MS: 1 CrossOver- 00:07:55.054 [2024-07-12 14:36:31.831385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.054 [2024-07-12 14:36:31.831425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.054 [2024-07-12 14:36:31.831539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.054 [2024-07-12 14:36:31.831557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.054 [2024-07-12 14:36:31.831650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.054 [2024-07-12 14:36:31.831667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.312 #17 NEW cov: 12106 ft: 14719 corp: 16/32b lim: 5 exec/s: 17 rss: 73Mb L: 3/5 MS: 1 InsertByte- 00:07:55.312 [2024-07-12 14:36:31.900949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.312 [2024-07-12 14:36:31.900978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.312 #18 NEW cov: 12106 ft: 14794 corp: 17/33b lim: 5 exec/s: 18 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:55.312 [2024-07-12 14:36:31.952086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.312 [2024-07-12 14:36:31.952113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.312 [2024-07-12 14:36:31.952211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.312 [2024-07-12 14:36:31.952231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.312 [2024-07-12 14:36:31.952324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.312 [2024-07-12 14:36:31.952343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.312 #19 NEW cov: 12106 ft: 14813 corp: 18/36b lim: 5 exec/s: 19 rss: 73Mb L: 3/5 MS: 1 ChangeByte- 00:07:55.312 [2024-07-12 14:36:32.021939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.312 [2024-07-12 14:36:32.021967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.312 [2024-07-12 14:36:32.072143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.312 [2024-07-12 14:36:32.072174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.312 #21 NEW cov: 12106 ft: 14827 corp: 19/37b lim: 5 exec/s: 21 rss: 73Mb L: 1/5 MS: 2 ShuffleBytes-ChangeBit- 00:07:55.570 [2024-07-12 14:36:32.122673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.570 [2024-07-12 14:36:32.122711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.570 [2024-07-12 14:36:32.122805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.570 [2024-07-12 14:36:32.122821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.570 #22 NEW cov: 12106 ft: 14835 corp: 20/39b lim: 5 exec/s: 22 rss: 73Mb L: 2/5 MS: 1 ChangeByte- 00:07:55.570 [2024-07-12 14:36:32.182912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.570 [2024-07-12 14:36:32.182939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.570 [2024-07-12 14:36:32.183035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.570 [2024-07-12 14:36:32.183051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.570 #23 NEW cov: 12106 ft: 14859 corp: 21/41b lim: 5 exec/s: 23 rss: 73Mb L: 2/5 MS: 1 CopyPart- 00:07:55.570 [2024-07-12 14:36:32.233127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.570 [2024-07-12 14:36:32.233152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.570 [2024-07-12 14:36:32.233258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.570 [2024-07-12 14:36:32.233275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.570 #24 NEW cov: 12106 ft: 14873 corp: 22/43b lim: 5 exec/s: 24 rss: 73Mb L: 2/5 MS: 1 CrossOver- 00:07:55.570 [2024-07-12 14:36:32.293348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:07:55.570 [2024-07-12 14:36:32.293378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.570 [2024-07-12 14:36:32.293467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.570 [2024-07-12 14:36:32.293484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.570 #25 NEW cov: 12106 ft: 14891 corp: 23/45b lim: 5 exec/s: 25 rss: 73Mb L: 2/5 MS: 1 CopyPart- 00:07:55.570 [2024-07-12 14:36:32.343995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.570 [2024-07-12 14:36:32.344020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.570 [2024-07-12 14:36:32.344106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.570 [2024-07-12 14:36:32.344122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.570 [2024-07-12 14:36:32.344219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.570 [2024-07-12 14:36:32.344236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.829 #26 NEW cov: 12106 ft: 14902 corp: 24/48b lim: 5 exec/s: 26 rss: 73Mb L: 3/5 MS: 1 CopyPart- 00:07:55.829 [2024-07-12 14:36:32.393766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.829 [2024-07-12 14:36:32.393791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.829 [2024-07-12 14:36:32.393892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.829 [2024-07-12 14:36:32.393908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.829 #27 NEW cov: 12106 ft: 14918 corp: 25/50b lim: 5 exec/s: 27 rss: 73Mb L: 2/5 MS: 1 ChangeBinInt- 00:07:55.829 [2024-07-12 14:36:32.454236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.829 [2024-07-12 14:36:32.454262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.829 [2024-07-12 14:36:32.454355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.829 [2024-07-12 14:36:32.454373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.829 #28 NEW cov: 
12106 ft: 14951 corp: 26/52b lim: 5 exec/s: 28 rss: 73Mb L: 2/5 MS: 1 InsertByte- 00:07:55.829 [2024-07-12 14:36:32.504755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.829 [2024-07-12 14:36:32.504781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.829 [2024-07-12 14:36:32.504887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.829 [2024-07-12 14:36:32.504905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.829 [2024-07-12 14:36:32.505004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.829 [2024-07-12 14:36:32.505024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.829 #29 NEW cov: 12106 ft: 14983 corp: 27/55b lim: 5 exec/s: 29 rss: 73Mb L: 3/5 MS: 1 InsertByte- 00:07:55.829 [2024-07-12 14:36:32.574282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.829 [2024-07-12 14:36:32.574308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.829 #30 NEW cov: 12106 ft: 14993 corp: 28/56b lim: 5 exec/s: 30 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:56.087 [2024-07-12 14:36:32.624467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.087 [2024-07-12 14:36:32.624493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.087 #31 NEW cov: 12106 ft: 15014 corp: 29/57b lim: 5 exec/s: 15 rss: 73Mb L: 1/5 MS: 1 CrossOver- 00:07:56.087 #31 DONE cov: 12106 ft: 15014 corp: 29/57b lim: 5 exec/s: 15 rss: 73Mb 00:07:56.087 Done 31 runs in 2 second(s) 00:07:56.087 14:36:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:07:56.087 14:36:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:56.087 14:36:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:56.087 14:36:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:07:56.087 14:36:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:07:56.087 14:36:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:56.087 14:36:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:56.087 14:36:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:56.087 14:36:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:07:56.088 14:36:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:56.088 14:36:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local 
LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:56.088 14:36:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:07:56.088 14:36:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:07:56.088 14:36:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:56.088 14:36:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:07:56.088 14:36:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:56.088 14:36:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:56.088 14:36:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:56.088 14:36:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:07:56.088 [2024-07-12 14:36:32.822245] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:07:56.088 [2024-07-12 14:36:32.822315] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1427183 ] 00:07:56.088 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.346 [2024-07-12 14:36:33.034577] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.346 [2024-07-12 14:36:33.108128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.603 [2024-07-12 14:36:33.167991] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.603 [2024-07-12 14:36:33.184194] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:07:56.603 INFO: Running with entropic power schedule (0xFF, 100). 00:07:56.603 INFO: Seed: 2071401684 00:07:56.603 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:56.603 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:56.603 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:56.603 INFO: A corpus is not provided, starting from an empty corpus 00:07:56.603 #2 INITED exec/s: 0 rss: 65Mb 00:07:56.603 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:56.603 This may also happen if the target rejected all inputs we tried so far 00:07:56.603 [2024-07-12 14:36:33.255622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.603 [2024-07-12 14:36:33.255667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.603 [2024-07-12 14:36:33.255790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.603 [2024-07-12 14:36:33.255809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.603 [2024-07-12 14:36:33.255918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.603 [2024-07-12 14:36:33.255935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.860 NEW_FUNC[1/694]: 0x490cf0 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:07:56.860 NEW_FUNC[2/694]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:56.860 #9 NEW cov: 11881 ft: 11881 corp: 2/26b lim: 40 exec/s: 0 rss: 72Mb L: 25/25 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:56.860 [2024-07-12 14:36:33.606043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.860 [2024-07-12 14:36:33.606086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.860 [2024-07-12 14:36:33.606179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:09000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.860 [2024-07-12 14:36:33.606198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.860 [2024-07-12 14:36:33.606292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.860 [2024-07-12 14:36:33.606309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.860 NEW_FUNC[1/1]: 0x101b660 in _sock_flush /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1331 00:07:56.860 #10 NEW cov: 12015 ft: 12500 corp: 3/51b lim: 40 exec/s: 0 rss: 72Mb L: 25/25 MS: 1 ChangeBinInt- 00:07:57.117 [2024-07-12 14:36:33.676300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.117 [2024-07-12 14:36:33.676328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.117 [2024-07-12 14:36:33.676424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) 
qid:0 cid:5 nsid:0 cdw10:ff0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.117 [2024-07-12 14:36:33.676439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.117 [2024-07-12 14:36:33.676532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.117 [2024-07-12 14:36:33.676547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.117 #11 NEW cov: 12021 ft: 12647 corp: 4/77b lim: 40 exec/s: 0 rss: 72Mb L: 26/26 MS: 1 CrossOver- 00:07:57.117 [2024-07-12 14:36:33.725913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.117 [2024-07-12 14:36:33.725940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.117 #12 NEW cov: 12106 ft: 13367 corp: 5/91b lim: 40 exec/s: 0 rss: 72Mb L: 14/26 MS: 1 EraseBytes- 00:07:57.117 [2024-07-12 14:36:33.786861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.117 [2024-07-12 14:36:33.786887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.117 [2024-07-12 14:36:33.786991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:09000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.117 [2024-07-12 14:36:33.787008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.117 [2024-07-12 14:36:33.787103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.117 [2024-07-12 14:36:33.787118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.117 #13 NEW cov: 12106 ft: 13533 corp: 6/116b lim: 40 exec/s: 0 rss: 72Mb L: 25/26 MS: 1 ShuffleBytes- 00:07:57.117 [2024-07-12 14:36:33.836929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.117 [2024-07-12 14:36:33.836956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.117 [2024-07-12 14:36:33.837059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0900ffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.117 [2024-07-12 14:36:33.837075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.117 #14 NEW cov: 12106 ft: 13807 corp: 7/132b lim: 40 exec/s: 0 rss: 72Mb L: 16/26 MS: 1 EraseBytes- 00:07:57.117 [2024-07-12 14:36:33.897097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.117 
[2024-07-12 14:36:33.897124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.117 [2024-07-12 14:36:33.897218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:09000aff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.118 [2024-07-12 14:36:33.897234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.375 #15 NEW cov: 12106 ft: 13890 corp: 8/149b lim: 40 exec/s: 0 rss: 72Mb L: 17/26 MS: 1 CrossOver- 00:07:57.375 [2024-07-12 14:36:33.957439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffff00 cdw11:ff09ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.375 [2024-07-12 14:36:33.957466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.375 [2024-07-12 14:36:33.957581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffff0aff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.375 [2024-07-12 14:36:33.957599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.375 #16 NEW cov: 12106 ft: 13929 corp: 9/166b lim: 40 exec/s: 0 rss: 72Mb L: 17/26 MS: 1 ShuffleBytes- 00:07:57.375 [2024-07-12 14:36:34.018065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.375 [2024-07-12 14:36:34.018093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.375 [2024-07-12 14:36:34.018187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff010000 cdw11:00023378 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.375 [2024-07-12 14:36:34.018204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.375 [2024-07-12 14:36:34.018302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:a90affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.375 [2024-07-12 14:36:34.018317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.375 [2024-07-12 14:36:34.018421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.375 [2024-07-12 14:36:34.018437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.375 #17 NEW cov: 12106 ft: 14398 corp: 10/200b lim: 40 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 CMP- DE: "\001\000\000\000\0023x\251"- 00:07:57.375 [2024-07-12 14:36:34.077963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.375 [2024-07-12 14:36:34.077989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.375 [2024-07-12 
14:36:34.078085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:09001000 cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.375 [2024-07-12 14:36:34.078102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.375 #18 NEW cov: 12106 ft: 14436 corp: 11/216b lim: 40 exec/s: 0 rss: 73Mb L: 16/34 MS: 1 ChangeBinInt- 00:07:57.375 [2024-07-12 14:36:34.128275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.375 [2024-07-12 14:36:34.128301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.376 [2024-07-12 14:36:34.128393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:01000000 cdw11:023378a9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.376 [2024-07-12 14:36:34.128410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.634 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:57.634 #19 NEW cov: 12129 ft: 14487 corp: 12/232b lim: 40 exec/s: 0 rss: 73Mb L: 16/34 MS: 1 PersAutoDict- DE: "\001\000\000\000\0023x\251"- 00:07:57.634 [2024-07-12 14:36:34.188837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff0f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.634 [2024-07-12 14:36:34.188865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.634 [2024-07-12 14:36:34.188963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.634 [2024-07-12 14:36:34.188981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.634 #20 NEW cov: 12129 ft: 14514 corp: 13/250b lim: 40 exec/s: 0 rss: 73Mb L: 18/34 MS: 1 CMP- DE: "\017\000\000\000"- 00:07:57.634 [2024-07-12 14:36:34.249244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:11ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.634 [2024-07-12 14:36:34.249270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.634 [2024-07-12 14:36:34.249381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:09001000 cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.634 [2024-07-12 14:36:34.249398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.634 #21 NEW cov: 12129 ft: 14531 corp: 14/266b lim: 40 exec/s: 21 rss: 73Mb L: 16/34 MS: 1 ChangeByte- 00:07:57.634 [2024-07-12 14:36:34.299815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.634 [2024-07-12 14:36:34.299842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.634 [2024-07-12 14:36:34.299940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff00ff09 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.634 [2024-07-12 14:36:34.299957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.634 [2024-07-12 14:36:34.300049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0affffff cdw11:09000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.634 [2024-07-12 14:36:34.300065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.634 #22 NEW cov: 12129 ft: 14538 corp: 15/293b lim: 40 exec/s: 22 rss: 73Mb L: 27/34 MS: 1 CrossOver- 00:07:57.634 [2024-07-12 14:36:34.350533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.634 [2024-07-12 14:36:34.350562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.634 [2024-07-12 14:36:34.350659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:3378a90a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.634 [2024-07-12 14:36:34.350676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.634 [2024-07-12 14:36:34.350766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.634 [2024-07-12 14:36:34.350783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.634 [2024-07-12 14:36:34.350873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.634 [2024-07-12 14:36:34.350891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.634 #23 NEW cov: 12129 ft: 14551 corp: 16/327b lim: 40 exec/s: 23 rss: 73Mb L: 34/34 MS: 1 CopyPart- 00:07:57.634 [2024-07-12 14:36:34.410431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff23 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.634 [2024-07-12 14:36:34.410459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.634 [2024-07-12 14:36:34.410555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff0900ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.634 [2024-07-12 14:36:34.410590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.893 #24 NEW cov: 12129 ft: 14562 corp: 17/344b lim: 40 exec/s: 24 rss: 73Mb L: 17/34 MS: 1 InsertByte- 00:07:57.893 [2024-07-12 14:36:34.460945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 
cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.893 [2024-07-12 14:36:34.460973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.893 [2024-07-12 14:36:34.461081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.893 [2024-07-12 14:36:34.461098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.893 [2024-07-12 14:36:34.461194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.893 [2024-07-12 14:36:34.461209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.893 #25 NEW cov: 12129 ft: 14568 corp: 18/373b lim: 40 exec/s: 25 rss: 73Mb L: 29/34 MS: 1 InsertRepeatedBytes- 00:07:57.893 [2024-07-12 14:36:34.510996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.893 [2024-07-12 14:36:34.511024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.893 [2024-07-12 14:36:34.511130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.893 [2024-07-12 14:36:34.511147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.893 [2024-07-12 14:36:34.511237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff41ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.893 [2024-07-12 14:36:34.511255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.893 #26 NEW cov: 12129 ft: 14583 corp: 19/399b lim: 40 exec/s: 26 rss: 73Mb L: 26/34 MS: 1 ChangeByte- 00:07:57.893 [2024-07-12 14:36:34.561030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:0200ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.893 [2024-07-12 14:36:34.561059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.893 [2024-07-12 14:36:34.561150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:09001000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.893 [2024-07-12 14:36:34.561168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.893 #27 NEW cov: 12129 ft: 14593 corp: 20/419b lim: 40 exec/s: 27 rss: 73Mb L: 20/34 MS: 1 CMP- DE: "\000\000\002\000"- 00:07:57.893 [2024-07-12 14:36:34.611092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.893 [2024-07-12 14:36:34.611121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.893 [2024-07-12 14:36:34.611219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:01000000 cdw11:023478a9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.893 [2024-07-12 14:36:34.611236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.893 #28 NEW cov: 12129 ft: 14628 corp: 21/435b lim: 40 exec/s: 28 rss: 73Mb L: 16/34 MS: 1 ChangeASCIIInt- 00:07:58.152 [2024-07-12 14:36:34.681849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.152 [2024-07-12 14:36:34.681879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.152 [2024-07-12 14:36:34.681974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff00ff09 cdw11:2bffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.152 [2024-07-12 14:36:34.681992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.152 [2024-07-12 14:36:34.682090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ff0affff cdw11:ff09000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.152 [2024-07-12 14:36:34.682107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.152 #29 NEW cov: 12129 ft: 14668 corp: 22/463b lim: 40 exec/s: 29 rss: 73Mb L: 28/34 MS: 1 InsertByte- 00:07:58.153 [2024-07-12 14:36:34.741604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.153 [2024-07-12 14:36:34.741631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.153 [2024-07-12 14:36:34.741735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000233 cdw11:78a9ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.153 [2024-07-12 14:36:34.741751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.153 #30 NEW cov: 12129 ft: 14704 corp: 23/480b lim: 40 exec/s: 30 rss: 73Mb L: 17/34 MS: 1 PersAutoDict- DE: "\001\000\000\000\0023x\251"- 00:07:58.153 [2024-07-12 14:36:34.791871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffff10 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.153 [2024-07-12 14:36:34.791898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.153 [2024-07-12 14:36:34.791995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:01000000 cdw11:023478a9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.153 [2024-07-12 14:36:34.792012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.153 #31 NEW cov: 12129 ft: 14713 corp: 24/496b lim: 40 exec/s: 31 rss: 73Mb L: 16/34 MS: 1 ChangeBinInt- 
00:07:58.153 [2024-07-12 14:36:34.852737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.153 [2024-07-12 14:36:34.852764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.153 [2024-07-12 14:36:34.852860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.153 [2024-07-12 14:36:34.852881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.153 [2024-07-12 14:36:34.852972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.153 [2024-07-12 14:36:34.852987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.153 #32 NEW cov: 12129 ft: 14847 corp: 25/521b lim: 40 exec/s: 32 rss: 73Mb L: 25/34 MS: 1 ShuffleBytes- 00:07:58.153 [2024-07-12 14:36:34.902843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.153 [2024-07-12 14:36:34.902869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.153 [2024-07-12 14:36:34.902973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff000009 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.153 [2024-07-12 14:36:34.902990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.153 [2024-07-12 14:36:34.903080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.153 [2024-07-12 14:36:34.903096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.153 #33 NEW cov: 12129 ft: 14865 corp: 26/546b lim: 40 exec/s: 33 rss: 73Mb L: 25/34 MS: 1 ShuffleBytes- 00:07:58.412 [2024-07-12 14:36:34.952990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffff8fff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.412 [2024-07-12 14:36:34.953017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.412 [2024-07-12 14:36:34.953107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:09001000 cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.412 [2024-07-12 14:36:34.953124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.412 #34 NEW cov: 12129 ft: 14905 corp: 27/562b lim: 40 exec/s: 34 rss: 73Mb L: 16/34 MS: 1 ChangeByte- 00:07:58.412 [2024-07-12 14:36:35.003354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffff01 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:58.412 [2024-07-12 14:36:35.003381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.412 [2024-07-12 14:36:35.003479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff000900 cdw11:023478a9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.412 [2024-07-12 14:36:35.003509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.412 #35 NEW cov: 12129 ft: 14918 corp: 28/578b lim: 40 exec/s: 35 rss: 73Mb L: 16/34 MS: 1 ChangeBinInt- 00:07:58.412 [2024-07-12 14:36:35.054270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.412 [2024-07-12 14:36:35.054297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.412 [2024-07-12 14:36:35.054392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.412 [2024-07-12 14:36:35.054408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.412 [2024-07-12 14:36:35.054504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.412 [2024-07-12 14:36:35.054521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.412 [2024-07-12 14:36:35.054620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.412 [2024-07-12 14:36:35.054636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.412 #36 NEW cov: 12129 ft: 14940 corp: 29/612b lim: 40 exec/s: 36 rss: 73Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:58.412 [2024-07-12 14:36:35.104584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.412 [2024-07-12 14:36:35.104610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.412 [2024-07-12 14:36:35.104704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff01257d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.412 [2024-07-12 14:36:35.104721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.412 [2024-07-12 14:36:35.104807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:60bd417d cdw11:22ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.412 [2024-07-12 14:36:35.104823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.412 [2024-07-12 14:36:35.104915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 
cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.412 [2024-07-12 14:36:35.104931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.412 #37 NEW cov: 12129 ft: 14960 corp: 30/645b lim: 40 exec/s: 37 rss: 73Mb L: 33/34 MS: 1 CMP- DE: "\001%}`\275A}\""- 00:07:58.412 [2024-07-12 14:36:35.164769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff0f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.412 [2024-07-12 14:36:35.164796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.412 [2024-07-12 14:36:35.164895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.412 [2024-07-12 14:36:35.164911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.412 [2024-07-12 14:36:35.165006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.412 [2024-07-12 14:36:35.165022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.412 #38 NEW cov: 12129 ft: 14967 corp: 31/670b lim: 40 exec/s: 38 rss: 73Mb L: 25/34 MS: 1 PersAutoDict- DE: "\017\000\000\000"- 00:07:58.671 [2024-07-12 14:36:35.215318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.671 [2024-07-12 14:36:35.215345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.671 [2024-07-12 14:36:35.215437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.671 [2024-07-12 14:36:35.215457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.671 [2024-07-12 14:36:35.215548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff010000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.671 [2024-07-12 14:36:35.215576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.671 [2024-07-12 14:36:35.215669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00023378 cdw11:a941ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.671 [2024-07-12 14:36:35.215685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.671 #39 NEW cov: 12129 ft: 14986 corp: 32/704b lim: 40 exec/s: 19 rss: 73Mb L: 34/34 MS: 1 PersAutoDict- DE: "\001\000\000\000\0023x\251"- 00:07:58.671 #39 DONE cov: 12129 ft: 14986 corp: 32/704b lim: 40 exec/s: 19 rss: 73Mb 00:07:58.671 ###### Recommended dictionary. 
###### 00:07:58.671 "\001\000\000\000\0023x\251" # Uses: 3 00:07:58.671 "\017\000\000\000" # Uses: 1 00:07:58.671 "\000\000\002\000" # Uses: 0 00:07:58.671 "\001%}`\275A}\"" # Uses: 0 00:07:58.671 ###### End of recommended dictionary. ###### 00:07:58.671 Done 39 runs in 2 second(s) 00:07:58.672 14:36:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:07:58.672 14:36:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:58.672 14:36:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:58.672 14:36:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:07:58.672 14:36:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:07:58.672 14:36:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:58.672 14:36:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:58.672 14:36:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:58.672 14:36:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:07:58.672 14:36:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:58.672 14:36:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:58.672 14:36:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:07:58.672 14:36:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:07:58.672 14:36:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:58.672 14:36:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:07:58.672 14:36:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:58.672 14:36:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:58.672 14:36:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:58.672 14:36:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:07:58.672 [2024-07-12 14:36:35.425510] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:07:58.672 [2024-07-12 14:36:35.425588] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1427539 ] 00:07:58.930 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.930 [2024-07-12 14:36:35.638989] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.930 [2024-07-12 14:36:35.712297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.189 [2024-07-12 14:36:35.772001] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.189 [2024-07-12 14:36:35.788199] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:07:59.189 INFO: Running with entropic power schedule (0xFF, 100). 00:07:59.189 INFO: Seed: 382414657 00:07:59.190 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:07:59.190 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:07:59.190 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:59.190 INFO: A corpus is not provided, starting from an empty corpus 00:07:59.190 #2 INITED exec/s: 0 rss: 65Mb 00:07:59.190 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:59.190 This may also happen if the target rejected all inputs we tried so far 00:07:59.190 [2024-07-12 14:36:35.853560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.190 [2024-07-12 14:36:35.853591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.448 NEW_FUNC[1/696]: 0x492a60 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:07:59.448 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:59.448 #3 NEW cov: 11897 ft: 11898 corp: 2/10b lim: 40 exec/s: 0 rss: 71Mb L: 9/9 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:59.448 [2024-07-12 14:36:36.194706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.448 [2024-07-12 14:36:36.194772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.706 #4 NEW cov: 12027 ft: 12444 corp: 3/19b lim: 40 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ShuffleBytes- 00:07:59.706 [2024-07-12 14:36:36.254599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00200000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.706 [2024-07-12 14:36:36.254626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.706 #10 NEW cov: 12033 ft: 12777 corp: 4/28b lim: 40 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBit- 00:07:59.706 [2024-07-12 14:36:36.304758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0000003d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.706 
[2024-07-12 14:36:36.304783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.706 #11 NEW cov: 12118 ft: 13121 corp: 5/37b lim: 40 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeByte- 00:07:59.706 [2024-07-12 14:36:36.344834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.706 [2024-07-12 14:36:36.344858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.706 #12 NEW cov: 12118 ft: 13367 corp: 6/46b lim: 40 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ShuffleBytes- 00:07:59.706 [2024-07-12 14:36:36.384998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:002d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.706 [2024-07-12 14:36:36.385023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.706 #18 NEW cov: 12118 ft: 13506 corp: 7/55b lim: 40 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeByte- 00:07:59.706 [2024-07-12 14:36:36.435078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.706 [2024-07-12 14:36:36.435105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.706 #19 NEW cov: 12118 ft: 13564 corp: 8/64b lim: 40 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ShuffleBytes- 00:07:59.706 [2024-07-12 14:36:36.475191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0000007e cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.706 [2024-07-12 14:36:36.475216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.964 #21 NEW cov: 12118 ft: 13580 corp: 9/72b lim: 40 exec/s: 0 rss: 72Mb L: 8/9 MS: 2 EraseBytes-InsertByte- 00:07:59.964 [2024-07-12 14:36:36.515359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0000003d cdw11:0000fffe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.964 [2024-07-12 14:36:36.515384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.964 #22 NEW cov: 12118 ft: 13600 corp: 10/81b lim: 40 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:59.964 [2024-07-12 14:36:36.565485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0000003d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.964 [2024-07-12 14:36:36.565509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.964 #23 NEW cov: 12118 ft: 13680 corp: 11/90b lim: 40 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 CopyPart- 00:07:59.964 [2024-07-12 14:36:36.605630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0000003d cdw11:00080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.964 [2024-07-12 14:36:36.605658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.964 #24 NEW cov: 12118 ft: 
13704 corp: 12/99b lim: 40 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBit- 00:07:59.964 [2024-07-12 14:36:36.645674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00200000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.964 [2024-07-12 14:36:36.645699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.964 #25 NEW cov: 12118 ft: 13783 corp: 13/108b lim: 40 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 CopyPart- 00:07:59.964 [2024-07-12 14:36:36.695865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00002d00 cdw11:002d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.964 [2024-07-12 14:36:36.695889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.964 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:59.964 #26 NEW cov: 12141 ft: 13823 corp: 14/117b lim: 40 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeByte- 00:07:59.964 [2024-07-12 14:36:36.746017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0000003d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.964 [2024-07-12 14:36:36.746042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.223 #27 NEW cov: 12141 ft: 13855 corp: 15/126b lim: 40 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:00.223 [2024-07-12 14:36:36.786129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:21000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.223 [2024-07-12 14:36:36.786154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.223 #28 NEW cov: 12141 ft: 13959 corp: 16/135b lim: 40 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 ChangeByte- 00:08:00.223 [2024-07-12 14:36:36.836262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0000003d cdw11:00090000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.223 [2024-07-12 14:36:36.836290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.223 #29 NEW cov: 12141 ft: 13999 corp: 17/144b lim: 40 exec/s: 29 rss: 73Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:00.223 [2024-07-12 14:36:36.886401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:01082d00 cdw11:002d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.223 [2024-07-12 14:36:36.886426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.223 #30 NEW cov: 12141 ft: 14017 corp: 18/153b lim: 40 exec/s: 30 rss: 73Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:00.223 [2024-07-12 14:36:36.936548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000400 cdw11:003d0008 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.223 [2024-07-12 14:36:36.936572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.223 #31 NEW cov: 12141 ft: 14032 corp: 19/164b lim: 40 exec/s: 31 rss: 73Mb L: 11/11 MS: 
1 CMP- DE: "\004\000"- 00:08:00.223 [2024-07-12 14:36:36.987050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0000003d cdw11:000900ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.223 [2024-07-12 14:36:36.987074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.223 [2024-07-12 14:36:36.987152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.223 [2024-07-12 14:36:36.987167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.223 [2024-07-12 14:36:36.987226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.223 [2024-07-12 14:36:36.987239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.481 #32 NEW cov: 12141 ft: 14783 corp: 20/195b lim: 40 exec/s: 32 rss: 73Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:08:00.481 [2024-07-12 14:36:37.036827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.481 [2024-07-12 14:36:37.036853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.481 #33 NEW cov: 12141 ft: 14786 corp: 21/204b lim: 40 exec/s: 33 rss: 73Mb L: 9/31 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:08:00.481 [2024-07-12 14:36:37.076947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000200 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.481 [2024-07-12 14:36:37.076974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.481 #34 NEW cov: 12141 ft: 14796 corp: 22/213b lim: 40 exec/s: 34 rss: 73Mb L: 9/31 MS: 1 ChangeBit- 00:08:00.481 [2024-07-12 14:36:37.117079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.481 [2024-07-12 14:36:37.117105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.481 #40 NEW cov: 12141 ft: 14811 corp: 23/222b lim: 40 exec/s: 40 rss: 73Mb L: 9/31 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:08:00.481 [2024-07-12 14:36:37.157220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00002d00 cdw11:002d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.481 [2024-07-12 14:36:37.157246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.481 #41 NEW cov: 12141 ft: 14823 corp: 24/237b lim: 40 exec/s: 41 rss: 73Mb L: 15/31 MS: 1 CopyPart- 00:08:00.481 [2024-07-12 14:36:37.197308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:083d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.481 [2024-07-12 14:36:37.197333] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.481 #42 NEW cov: 12141 ft: 14824 corp: 25/246b lim: 40 exec/s: 42 rss: 73Mb L: 9/31 MS: 1 ShuffleBytes- 00:08:00.481 [2024-07-12 14:36:37.237608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0000003d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.481 [2024-07-12 14:36:37.237634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.481 [2024-07-12 14:36:37.237697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:14000000 cdw11:3d000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.481 [2024-07-12 14:36:37.237713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.481 #43 NEW cov: 12141 ft: 15028 corp: 26/264b lim: 40 exec/s: 43 rss: 73Mb L: 18/31 MS: 1 CopyPart- 00:08:00.738 [2024-07-12 14:36:37.277589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000099 cdw11:3d000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.738 [2024-07-12 14:36:37.277615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.738 #44 NEW cov: 12141 ft: 15078 corp: 27/274b lim: 40 exec/s: 44 rss: 73Mb L: 10/31 MS: 1 InsertByte- 00:08:00.738 [2024-07-12 14:36:37.317848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.738 [2024-07-12 14:36:37.317874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.738 [2024-07-12 14:36:37.317933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000200 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.738 [2024-07-12 14:36:37.317947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.738 #45 NEW cov: 12141 ft: 15082 corp: 28/291b lim: 40 exec/s: 45 rss: 73Mb L: 17/31 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:08:00.738 [2024-07-12 14:36:37.367863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00007e00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.738 [2024-07-12 14:36:37.367888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.739 #46 NEW cov: 12141 ft: 15130 corp: 29/301b lim: 40 exec/s: 46 rss: 73Mb L: 10/31 MS: 1 InsertByte- 00:08:00.739 [2024-07-12 14:36:37.417997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00002d00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.739 [2024-07-12 14:36:37.418023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.739 #48 NEW cov: 12141 ft: 15139 corp: 30/314b lim: 40 exec/s: 48 rss: 73Mb L: 13/31 MS: 2 EraseBytes-PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:08:00.739 [2024-07-12 14:36:37.458102] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00007e00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.739 [2024-07-12 14:36:37.458128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.739 #49 NEW cov: 12141 ft: 15154 corp: 31/324b lim: 40 exec/s: 49 rss: 73Mb L: 10/31 MS: 1 ChangeBit- 00:08:00.739 [2024-07-12 14:36:37.508251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00007e80 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.739 [2024-07-12 14:36:37.508276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.996 #50 NEW cov: 12141 ft: 15182 corp: 32/334b lim: 40 exec/s: 50 rss: 73Mb L: 10/31 MS: 1 ChangeBit- 00:08:00.996 [2024-07-12 14:36:37.558395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00007e00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.996 [2024-07-12 14:36:37.558421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.996 #51 NEW cov: 12141 ft: 15229 corp: 33/344b lim: 40 exec/s: 51 rss: 73Mb L: 10/31 MS: 1 ChangeByte- 00:08:00.996 [2024-07-12 14:36:37.598695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:21000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.996 [2024-07-12 14:36:37.598720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.996 [2024-07-12 14:36:37.598793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.996 [2024-07-12 14:36:37.598807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.996 #52 NEW cov: 12141 ft: 15254 corp: 34/361b lim: 40 exec/s: 52 rss: 73Mb L: 17/31 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:08:00.997 [2024-07-12 14:36:37.648649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00002d00 cdw11:30000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.997 [2024-07-12 14:36:37.648675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.997 #53 NEW cov: 12141 ft: 15263 corp: 35/374b lim: 40 exec/s: 53 rss: 74Mb L: 13/31 MS: 1 ChangeByte- 00:08:00.997 [2024-07-12 14:36:37.698761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:0000003d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.997 [2024-07-12 14:36:37.698785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.997 #54 NEW cov: 12141 ft: 15274 corp: 36/387b lim: 40 exec/s: 54 rss: 74Mb L: 13/31 MS: 1 InsertRepeatedBytes- 00:08:00.997 [2024-07-12 14:36:37.738864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0000003d cdw11:00006500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.997 [2024-07-12 14:36:37.738888] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.997 #55 NEW cov: 12141 ft: 15276 corp: 37/396b lim: 40 exec/s: 55 rss: 74Mb L: 9/31 MS: 1 ChangeByte- 00:08:00.997 [2024-07-12 14:36:37.778969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:00.997 [2024-07-12 14:36:37.778993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.256 #56 NEW cov: 12141 ft: 15279 corp: 38/405b lim: 40 exec/s: 56 rss: 74Mb L: 9/31 MS: 1 ShuffleBytes- 00:08:01.256 [2024-07-12 14:36:37.819242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.256 [2024-07-12 14:36:37.819267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.256 [2024-07-12 14:36:37.819342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.256 [2024-07-12 14:36:37.819356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.256 #57 NEW cov: 12141 ft: 15386 corp: 39/423b lim: 40 exec/s: 28 rss: 74Mb L: 18/31 MS: 1 CopyPart- 00:08:01.256 #57 DONE cov: 12141 ft: 15386 corp: 39/423b lim: 40 exec/s: 28 rss: 74Mb 00:08:01.256 ###### Recommended dictionary. ###### 00:08:01.256 "\000\000\000\000\000\000\000\000" # Uses: 5 00:08:01.256 "\004\000" # Uses: 0 00:08:01.256 ###### End of recommended dictionary. 
###### 00:08:01.256 Done 57 runs in 2 second(s) 00:08:01.256 14:36:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:08:01.256 14:36:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:01.256 14:36:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:01.256 14:36:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:08:01.256 14:36:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:08:01.256 14:36:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:01.256 14:36:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:01.256 14:36:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:01.256 14:36:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:08:01.256 14:36:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:01.256 14:36:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:01.256 14:36:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:08:01.256 14:36:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:08:01.256 14:36:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:01.256 14:36:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:08:01.256 14:36:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:01.256 14:36:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:01.256 14:36:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:01.256 14:36:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:08:01.256 [2024-07-12 14:36:38.025403] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
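The nvmf/run.sh trace above (from the `rm -rf /tmp/fuzz_json_11.conf` cleanup through the `llvm_nvme_fuzz ... -Z 12` launch) spells out how each fuzzer type gets its own NVMe/TCP port, corpus directory, and target config. Below is a condensed sketch of that per-type setup, not the literal nvmf/run.sh: the function name and `$rootdir` default are placeholders, the output redirections for the `sed` and `echo` steps are assumptions (the trace only shows the commands themselves), and the `44` + `printf %02d` port concatenation is inferred from the `printf %02d 12` / `port=4412` pair in the trace.

#!/usr/bin/env bash
# Sketch of the per-fuzzer-type setup traced in the log above (hypothetical
# helper; the real logic lives in test/fuzz/llvm/nvmf/run.sh and ../common.sh).
rootdir=${rootdir:-/var/jenkins/workspace/short-fuzz-phy-autotest/spdk}

start_llvm_fuzz_sketch() {
    local fuzzer_type=$1 timen=$2 core=$3
    local corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
    local nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
    local suppress_file=/var/tmp/suppress_nvmf_fuzz

    # Each fuzzer type listens on its own NVMe/TCP port: 4412 for type 12,
    # 4413 for type 13, and so on.
    local port="44$(printf %02d "$fuzzer_type")"
    mkdir -p "$corpus_dir"

    local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

    # Rewrite the shared JSON config so the target listens on this run's port.
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

    # Known, intentional leaks are suppressed for LeakSanitizer.
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
    echo leak:nvmf_ctrlr_create >> "$suppress_file"

    # Run the LLVM NVMe fuzzer against the rewritten config for $timen seconds.
    LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0 \
        "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
        -P "$rootdir/../output/llvm/" -F "$trid" -c "$nvmf_cfg" \
        -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"

    # The trace shows this cleanup at nvmf/run.sh@54, after the fuzzer finishes.
    rm -rf "$nvmf_cfg" "$suppress_file"
}

The later trace repeats the same sequence for fuzzer type 13, with port 4413 and corpus directory llvm_nvmf_13.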
00:08:01.256 [2024-07-12 14:36:38.025490] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1427898 ] 00:08:01.514 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.514 [2024-07-12 14:36:38.241874] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.772 [2024-07-12 14:36:38.315476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.772 [2024-07-12 14:36:38.374924] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.772 [2024-07-12 14:36:38.391142] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:08:01.772 INFO: Running with entropic power schedule (0xFF, 100). 00:08:01.772 INFO: Seed: 2983427602 00:08:01.772 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:08:01.772 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:08:01.772 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:01.772 INFO: A corpus is not provided, starting from an empty corpus 00:08:01.772 #2 INITED exec/s: 0 rss: 64Mb 00:08:01.772 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:01.772 This may also happen if the target rejected all inputs we tried so far 00:08:01.773 [2024-07-12 14:36:38.450225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.773 [2024-07-12 14:36:38.450256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.773 [2024-07-12 14:36:38.450312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.773 [2024-07-12 14:36:38.450327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.773 [2024-07-12 14:36:38.450379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.773 [2024-07-12 14:36:38.450392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.030 NEW_FUNC[1/696]: 0x4947d0 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:08:02.030 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:02.030 #3 NEW cov: 11895 ft: 11895 corp: 2/32b lim: 40 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:08:02.030 [2024-07-12 14:36:38.791474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffff0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.030 [2024-07-12 14:36:38.791548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.030 [2024-07-12 14:36:38.791636] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.030 [2024-07-12 14:36:38.791664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.031 [2024-07-12 14:36:38.791750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.031 [2024-07-12 14:36:38.791777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.288 #6 NEW cov: 12025 ft: 12638 corp: 3/59b lim: 40 exec/s: 0 rss: 72Mb L: 27/31 MS: 3 CrossOver-CopyPart-InsertRepeatedBytes- 00:08:02.288 [2024-07-12 14:36:38.841206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffff0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.288 [2024-07-12 14:36:38.841233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.288 [2024-07-12 14:36:38.841290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.288 [2024-07-12 14:36:38.841303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.288 [2024-07-12 14:36:38.841354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.288 [2024-07-12 14:36:38.841367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.288 #7 NEW cov: 12031 ft: 12961 corp: 4/86b lim: 40 exec/s: 0 rss: 72Mb L: 27/31 MS: 1 ChangeBit- 00:08:02.288 [2024-07-12 14:36:38.891208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.288 [2024-07-12 14:36:38.891235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.288 [2024-07-12 14:36:38.891290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.288 [2024-07-12 14:36:38.891304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.288 #8 NEW cov: 12116 ft: 13429 corp: 5/107b lim: 40 exec/s: 0 rss: 72Mb L: 21/31 MS: 1 EraseBytes- 00:08:02.288 [2024-07-12 14:36:38.941316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a7e7aff cdw11:ff0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.288 [2024-07-12 14:36:38.941340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.288 [2024-07-12 14:36:38.941394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.288 [2024-07-12 14:36:38.941408] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.288 #12 NEW cov: 12116 ft: 13588 corp: 6/128b lim: 40 exec/s: 0 rss: 72Mb L: 21/31 MS: 4 InsertByte-ShuffleBytes-InsertByte-CrossOver- 00:08:02.288 [2024-07-12 14:36:38.981422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a7e7aff cdw11:ff0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.288 [2024-07-12 14:36:38.981446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.288 [2024-07-12 14:36:38.981504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0d3d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.288 [2024-07-12 14:36:38.981517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.288 #13 NEW cov: 12116 ft: 13655 corp: 7/150b lim: 40 exec/s: 0 rss: 72Mb L: 22/31 MS: 1 InsertByte- 00:08:02.288 [2024-07-12 14:36:39.031573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a7e7aff cdw11:ff0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.288 [2024-07-12 14:36:39.031597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.288 [2024-07-12 14:36:39.031653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d2e0d0d cdw11:3d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.288 [2024-07-12 14:36:39.031667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.288 #14 NEW cov: 12116 ft: 13748 corp: 8/173b lim: 40 exec/s: 0 rss: 72Mb L: 23/31 MS: 1 InsertByte- 00:08:02.547 [2024-07-12 14:36:39.081860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:9d0affff cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.547 [2024-07-12 14:36:39.081885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.547 [2024-07-12 14:36:39.081940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.547 [2024-07-12 14:36:39.081955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.547 [2024-07-12 14:36:39.082024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.547 [2024-07-12 14:36:39.082038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.547 #16 NEW cov: 12116 ft: 13857 corp: 9/199b lim: 40 exec/s: 0 rss: 72Mb L: 26/31 MS: 2 InsertByte-CrossOver- 00:08:02.547 [2024-07-12 14:36:39.121977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.547 [2024-07-12 14:36:39.122001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:08:02.548 [2024-07-12 14:36:39.122056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.548 [2024-07-12 14:36:39.122070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.548 [2024-07-12 14:36:39.122124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.548 [2024-07-12 14:36:39.122137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.548 #17 NEW cov: 12116 ft: 13878 corp: 10/230b lim: 40 exec/s: 0 rss: 72Mb L: 31/31 MS: 1 ShuffleBytes- 00:08:02.548 [2024-07-12 14:36:39.162125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff0dff0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.548 [2024-07-12 14:36:39.162150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.548 [2024-07-12 14:36:39.162205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.548 [2024-07-12 14:36:39.162219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.548 [2024-07-12 14:36:39.162273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.548 [2024-07-12 14:36:39.162286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.548 #18 NEW cov: 12116 ft: 13914 corp: 11/257b lim: 40 exec/s: 0 rss: 72Mb L: 27/31 MS: 1 ShuffleBytes- 00:08:02.548 [2024-07-12 14:36:39.202351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.548 [2024-07-12 14:36:39.202375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.548 [2024-07-12 14:36:39.202428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.548 [2024-07-12 14:36:39.202441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.548 [2024-07-12 14:36:39.202495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.548 [2024-07-12 14:36:39.202509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.548 [2024-07-12 14:36:39.202539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.548 [2024-07-12 14:36:39.202550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.548 #19 NEW cov: 12116 ft: 14230 corp: 12/294b lim: 40 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:08:02.548 [2024-07-12 14:36:39.252191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a7e7aff cdw11:ff0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.548 [2024-07-12 14:36:39.252215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.548 [2024-07-12 14:36:39.252273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0d2c cdw11:3d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.548 [2024-07-12 14:36:39.252287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.548 #20 NEW cov: 12116 ft: 14255 corp: 13/317b lim: 40 exec/s: 0 rss: 72Mb L: 23/37 MS: 1 InsertByte- 00:08:02.548 [2024-07-12 14:36:39.292347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a7e7aff cdw11:ff770d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.548 [2024-07-12 14:36:39.292371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.548 [2024-07-12 14:36:39.292427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:3d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.548 [2024-07-12 14:36:39.292441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.548 #21 NEW cov: 12116 ft: 14355 corp: 14/340b lim: 40 exec/s: 0 rss: 72Mb L: 23/37 MS: 1 InsertByte- 00:08:02.548 [2024-07-12 14:36:39.332685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff0dff0d cdw11:0d0d0df3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.548 [2024-07-12 14:36:39.332710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.548 [2024-07-12 14:36:39.332782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:f2f2f2f2 cdw11:f2f2ef0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.548 [2024-07-12 14:36:39.332797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.548 [2024-07-12 14:36:39.332852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.548 [2024-07-12 14:36:39.332866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.807 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:02.807 #22 NEW cov: 12139 ft: 14421 corp: 15/367b lim: 40 exec/s: 0 rss: 73Mb L: 27/37 MS: 1 ChangeBinInt- 00:08:02.807 [2024-07-12 14:36:39.382902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.807 [2024-07-12 14:36:39.382927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.807 [2024-07-12 14:36:39.382981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff3d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.807 [2024-07-12 14:36:39.382995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.807 [2024-07-12 14:36:39.383048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0d0d0dff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.807 [2024-07-12 14:36:39.383062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.807 [2024-07-12 14:36:39.383115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.807 [2024-07-12 14:36:39.383128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.807 #23 NEW cov: 12139 ft: 14457 corp: 16/402b lim: 40 exec/s: 0 rss: 73Mb L: 35/37 MS: 1 CrossOver- 00:08:02.807 [2024-07-12 14:36:39.423005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff0dff0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.807 [2024-07-12 14:36:39.423032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.807 [2024-07-12 14:36:39.423103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.807 [2024-07-12 14:36:39.423117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.807 [2024-07-12 14:36:39.423175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.807 [2024-07-12 14:36:39.423188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.807 [2024-07-12 14:36:39.423243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:96969696 cdw11:9696960d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.807 [2024-07-12 14:36:39.423256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.807 #24 NEW cov: 12139 ft: 14485 corp: 17/436b lim: 40 exec/s: 24 rss: 73Mb L: 34/37 MS: 1 InsertRepeatedBytes- 00:08:02.807 [2024-07-12 14:36:39.462964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a7e7aff cdw11:ff770d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.807 [2024-07-12 14:36:39.462989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.807 [2024-07-12 14:36:39.463045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:3d0d0d7e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.807 [2024-07-12 14:36:39.463058] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.807 [2024-07-12 14:36:39.463115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:7affff77 cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.807 [2024-07-12 14:36:39.463128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.807 #25 NEW cov: 12139 ft: 14502 corp: 18/466b lim: 40 exec/s: 25 rss: 73Mb L: 30/37 MS: 1 CopyPart- 00:08:02.807 [2024-07-12 14:36:39.513146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.807 [2024-07-12 14:36:39.513170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.807 [2024-07-12 14:36:39.513225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.807 [2024-07-12 14:36:39.513239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.807 [2024-07-12 14:36:39.513295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.807 [2024-07-12 14:36:39.513308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.807 #26 NEW cov: 12139 ft: 14525 corp: 19/497b lim: 40 exec/s: 26 rss: 73Mb L: 31/37 MS: 1 ShuffleBytes- 00:08:02.807 [2024-07-12 14:36:39.553213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a7e7aff cdw11:ff0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.807 [2024-07-12 14:36:39.553238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.807 [2024-07-12 14:36:39.553297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0d3d cdw11:0d0d7e7a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.807 [2024-07-12 14:36:39.553311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.807 [2024-07-12 14:36:39.553366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffff770d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.807 [2024-07-12 14:36:39.553380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.807 #27 NEW cov: 12139 ft: 14561 corp: 20/527b lim: 40 exec/s: 27 rss: 73Mb L: 30/37 MS: 1 CopyPart- 00:08:03.065 [2024-07-12 14:36:39.603375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a7e7aff cdw11:ff0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.065 [2024-07-12 14:36:39.603401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.065 [2024-07-12 14:36:39.603455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0d00 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.065 [2024-07-12 14:36:39.603469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.065 [2024-07-12 14:36:39.603547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000f3d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.065 [2024-07-12 14:36:39.603562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.065 #28 NEW cov: 12139 ft: 14575 corp: 21/557b lim: 40 exec/s: 28 rss: 73Mb L: 30/37 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\017"- 00:08:03.065 [2024-07-12 14:36:39.643487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a7e7aff cdw11:ff770d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.065 [2024-07-12 14:36:39.643511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.065 [2024-07-12 14:36:39.643569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:3d0d0d7e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.065 [2024-07-12 14:36:39.643583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.065 [2024-07-12 14:36:39.643638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0d0d0d77 cdw11:7affff0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.065 [2024-07-12 14:36:39.643651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.065 #29 NEW cov: 12139 ft: 14578 corp: 22/587b lim: 40 exec/s: 29 rss: 73Mb L: 30/37 MS: 1 ShuffleBytes- 00:08:03.065 [2024-07-12 14:36:39.683647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffff0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.065 [2024-07-12 14:36:39.683673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.065 [2024-07-12 14:36:39.683734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.065 [2024-07-12 14:36:39.683748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.065 [2024-07-12 14:36:39.683803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.066 [2024-07-12 14:36:39.683817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.066 #30 NEW cov: 12139 ft: 14585 corp: 23/614b lim: 40 exec/s: 30 rss: 73Mb L: 27/37 MS: 1 ShuffleBytes- 00:08:03.066 [2024-07-12 14:36:39.723742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff0dff0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.066 [2024-07-12 14:36:39.723767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:08:03.066 [2024-07-12 14:36:39.723824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.066 [2024-07-12 14:36:39.723837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.066 [2024-07-12 14:36:39.723893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d13 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.066 [2024-07-12 14:36:39.723906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.066 #31 NEW cov: 12139 ft: 14605 corp: 24/641b lim: 40 exec/s: 31 rss: 73Mb L: 27/37 MS: 1 ChangeBinInt- 00:08:03.066 [2024-07-12 14:36:39.763833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff0dff0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.066 [2024-07-12 14:36:39.763860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.066 [2024-07-12 14:36:39.763920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.066 [2024-07-12 14:36:39.763934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.066 [2024-07-12 14:36:39.763990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0d0d0d0d cdw11:0dffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.066 [2024-07-12 14:36:39.764004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.066 #32 NEW cov: 12139 ft: 14661 corp: 25/668b lim: 40 exec/s: 32 rss: 73Mb L: 27/37 MS: 1 CrossOver- 00:08:03.066 [2024-07-12 14:36:39.803990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a7e7aff cdw11:ff770d7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.066 [2024-07-12 14:36:39.804016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.066 [2024-07-12 14:36:39.804072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:3d0d0d7e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.066 [2024-07-12 14:36:39.804085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.066 [2024-07-12 14:36:39.804140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0d0d0d77 cdw11:7affff0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.066 [2024-07-12 14:36:39.804153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.066 #33 NEW cov: 12139 ft: 14669 corp: 26/698b lim: 40 exec/s: 33 rss: 73Mb L: 30/37 MS: 1 ChangeByte- 00:08:03.324 [2024-07-12 14:36:39.854007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.324 
[2024-07-12 14:36:39.854034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.324 [2024-07-12 14:36:39.854092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.324 [2024-07-12 14:36:39.854111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.324 #34 NEW cov: 12139 ft: 14733 corp: 27/719b lim: 40 exec/s: 34 rss: 73Mb L: 21/37 MS: 1 ChangeByte- 00:08:03.324 [2024-07-12 14:36:39.904406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff0dff0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.324 [2024-07-12 14:36:39.904431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.324 [2024-07-12 14:36:39.904487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:7f0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.324 [2024-07-12 14:36:39.904501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.324 [2024-07-12 14:36:39.904555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0d3d0d0d cdw11:7e0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.324 [2024-07-12 14:36:39.904569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.324 [2024-07-12 14:36:39.904625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:96969696 cdw11:9696960d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.324 [2024-07-12 14:36:39.904638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.324 #35 NEW cov: 12139 ft: 14767 corp: 28/753b lim: 40 exec/s: 35 rss: 73Mb L: 34/37 MS: 1 CrossOver- 00:08:03.324 [2024-07-12 14:36:39.954564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.324 [2024-07-12 14:36:39.954590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.324 [2024-07-12 14:36:39.954664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:2effffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.324 [2024-07-12 14:36:39.954678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.324 [2024-07-12 14:36:39.954735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.324 [2024-07-12 14:36:39.954749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.324 [2024-07-12 14:36:39.954806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:03.324 [2024-07-12 14:36:39.954820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.324 #36 NEW cov: 12139 ft: 14805 corp: 29/791b lim: 40 exec/s: 36 rss: 73Mb L: 38/38 MS: 1 InsertByte- 00:08:03.324 [2024-07-12 14:36:40.004526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff0dff0d cdw11:0d0df308 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.324 [2024-07-12 14:36:40.004556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.324 [2024-07-12 14:36:40.004610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:f2f2f2f2 cdw11:f2f2ef0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.324 [2024-07-12 14:36:40.004625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.324 [2024-07-12 14:36:40.004681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.324 [2024-07-12 14:36:40.004699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.324 #37 NEW cov: 12139 ft: 14818 corp: 30/818b lim: 40 exec/s: 37 rss: 73Mb L: 27/38 MS: 1 ChangeBinInt- 00:08:03.324 [2024-07-12 14:36:40.054744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a7e7aff cdw11:ff0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.324 [2024-07-12 14:36:40.054778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.324 [2024-07-12 14:36:40.054837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0daf cdw11:2c3d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.324 [2024-07-12 14:36:40.054853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.324 [2024-07-12 14:36:40.054909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.324 [2024-07-12 14:36:40.054925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.324 #38 NEW cov: 12139 ft: 14828 corp: 31/842b lim: 40 exec/s: 38 rss: 73Mb L: 24/38 MS: 1 InsertByte- 00:08:03.324 [2024-07-12 14:36:40.105044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a7e7aff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.324 [2024-07-12 14:36:40.105078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.324 [2024-07-12 14:36:40.105153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0f770d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.324 [2024-07-12 14:36:40.105170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.324 [2024-07-12 14:36:40.105228] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0d0d0d0d cdw11:3d0d0d7e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.324 [2024-07-12 14:36:40.105244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.324 [2024-07-12 14:36:40.105302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0d0d0d77 cdw11:7affff0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.324 [2024-07-12 14:36:40.105318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.582 #39 NEW cov: 12139 ft: 14861 corp: 32/880b lim: 40 exec/s: 39 rss: 73Mb L: 38/38 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\017"- 00:08:03.582 [2024-07-12 14:36:40.145109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff0dff0d cdw11:0d0d0df3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.582 [2024-07-12 14:36:40.145134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.582 [2024-07-12 14:36:40.145207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:f2f2f2f2 cdw11:f2f24b4b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.582 [2024-07-12 14:36:40.145221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.582 [2024-07-12 14:36:40.145278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:4b4b4b4b cdw11:4b4b4b4b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.582 [2024-07-12 14:36:40.145292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.582 [2024-07-12 14:36:40.145348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:4bef0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.582 [2024-07-12 14:36:40.145365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.582 #40 NEW cov: 12139 ft: 14876 corp: 33/918b lim: 40 exec/s: 40 rss: 73Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:08:03.582 [2024-07-12 14:36:40.185200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.582 [2024-07-12 14:36:40.185225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.582 [2024-07-12 14:36:40.185281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:2effffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.582 [2024-07-12 14:36:40.185294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.582 [2024-07-12 14:36:40.185351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.582 [2024-07-12 14:36:40.185364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 
p:0 m:0 dnr:0 00:08:03.582 [2024-07-12 14:36:40.185432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.582 [2024-07-12 14:36:40.185446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.582 #41 NEW cov: 12139 ft: 14905 corp: 34/956b lim: 40 exec/s: 41 rss: 73Mb L: 38/38 MS: 1 ChangeBit- 00:08:03.582 [2024-07-12 14:36:40.235185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff0dff0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.582 [2024-07-12 14:36:40.235213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.582 [2024-07-12 14:36:40.235284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.582 [2024-07-12 14:36:40.235298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.582 [2024-07-12 14:36:40.235353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.582 [2024-07-12 14:36:40.235367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.582 #42 NEW cov: 12139 ft: 14908 corp: 35/984b lim: 40 exec/s: 42 rss: 73Mb L: 28/38 MS: 1 InsertByte- 00:08:03.582 [2024-07-12 14:36:40.275225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a7e7aff cdw11:ff0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.582 [2024-07-12 14:36:40.275250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.582 [2024-07-12 14:36:40.275322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d000000 cdw11:000d0d2c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.582 [2024-07-12 14:36:40.275336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.582 [2024-07-12 14:36:40.275391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:3d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.582 [2024-07-12 14:36:40.275405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.582 #43 NEW cov: 12139 ft: 14981 corp: 36/1011b lim: 40 exec/s: 43 rss: 73Mb L: 27/38 MS: 1 InsertRepeatedBytes- 00:08:03.582 [2024-07-12 14:36:40.315257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.582 [2024-07-12 14:36:40.315282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.582 [2024-07-12 14:36:40.315340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:03.582 [2024-07-12 14:36:40.315354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.582 #44 NEW cov: 12139 ft: 14999 corp: 37/1032b lim: 40 exec/s: 44 rss: 73Mb L: 21/38 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\017"- 00:08:03.582 [2024-07-12 14:36:40.355566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffff0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.582 [2024-07-12 14:36:40.355591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.583 [2024-07-12 14:36:40.355646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.583 [2024-07-12 14:36:40.355659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.583 [2024-07-12 14:36:40.355728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0d0d0d0d cdw11:0d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.583 [2024-07-12 14:36:40.355742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.840 #45 NEW cov: 12139 ft: 15007 corp: 38/1059b lim: 40 exec/s: 45 rss: 73Mb L: 27/38 MS: 1 ShuffleBytes- 00:08:03.840 [2024-07-12 14:36:40.405534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a7e7aff cdw11:ff0d0d0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.840 [2024-07-12 14:36:40.405558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.840 [2024-07-12 14:36:40.405615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0d0d0d2c cdw11:3d0d0d0d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.840 [2024-07-12 14:36:40.405629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.840 #46 NEW cov: 12139 ft: 15011 corp: 39/1082b lim: 40 exec/s: 46 rss: 73Mb L: 23/38 MS: 1 ChangeByte- 00:08:03.840 [2024-07-12 14:36:40.445966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.840 [2024-07-12 14:36:40.445991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.840 [2024-07-12 14:36:40.446065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.840 [2024-07-12 14:36:40.446079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.840 [2024-07-12 14:36:40.446135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.840 [2024-07-12 14:36:40.446149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.840 [2024-07-12 
14:36:40.446205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.840 [2024-07-12 14:36:40.446218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.840 #47 NEW cov: 12139 ft: 15018 corp: 40/1121b lim: 40 exec/s: 23 rss: 73Mb L: 39/39 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\017"- 00:08:03.840 #47 DONE cov: 12139 ft: 15018 corp: 40/1121b lim: 40 exec/s: 23 rss: 73Mb 00:08:03.840 ###### Recommended dictionary. ###### 00:08:03.840 "\000\000\000\000\000\000\000\017" # Uses: 3 00:08:03.840 ###### End of recommended dictionary. ###### 00:08:03.840 Done 47 runs in 2 second(s) 00:08:03.840 14:36:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:08:03.840 14:36:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:03.840 14:36:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:03.840 14:36:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:08:03.840 14:36:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:08:03.840 14:36:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:03.840 14:36:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:03.840 14:36:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:03.840 14:36:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:08:03.840 14:36:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:03.840 14:36:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:03.840 14:36:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:08:03.840 14:36:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:08:03.840 14:36:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:03.840 14:36:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:08:03.840 14:36:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:03.840 14:36:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:03.840 14:36:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:03.841 14:36:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:08:04.098 [2024-07-12 14:36:40.652695] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:08:04.098 [2024-07-12 14:36:40.652766] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1428254 ] 00:08:04.098 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.098 [2024-07-12 14:36:40.867272] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.357 [2024-07-12 14:36:40.940809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.357 [2024-07-12 14:36:41.000348] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.357 [2024-07-12 14:36:41.016556] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:08:04.357 INFO: Running with entropic power schedule (0xFF, 100). 00:08:04.357 INFO: Seed: 1315459105 00:08:04.357 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:08:04.357 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:08:04.357 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:04.357 INFO: A corpus is not provided, starting from an empty corpus 00:08:04.357 #2 INITED exec/s: 0 rss: 65Mb 00:08:04.357 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:04.357 This may also happen if the target rejected all inputs we tried so far 00:08:04.357 [2024-07-12 14:36:41.082145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.357 [2024-07-12 14:36:41.082173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.357 [2024-07-12 14:36:41.082230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.357 [2024-07-12 14:36:41.082244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.357 [2024-07-12 14:36:41.082300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.357 [2024-07-12 14:36:41.082314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.924 NEW_FUNC[1/695]: 0x496390 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:08:04.924 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:04.924 #21 NEW cov: 11883 ft: 11865 corp: 2/30b lim: 40 exec/s: 0 rss: 72Mb L: 29/29 MS: 4 InsertByte-ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:08:04.924 [2024-07-12 14:36:41.423169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:01007f60 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.423230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.924 [2024-07-12 
14:36:41.423322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:88002255 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.423349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.924 [2024-07-12 14:36:41.423428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.423454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.924 #22 NEW cov: 12013 ft: 12545 corp: 3/59b lim: 40 exec/s: 0 rss: 72Mb L: 29/29 MS: 1 CMP- DE: "\001\000\177`\210\000\"U"- 00:08:04.924 [2024-07-12 14:36:41.483276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.483302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.924 [2024-07-12 14:36:41.483373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.483387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.924 [2024-07-12 14:36:41.483440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.483454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.924 [2024-07-12 14:36:41.483507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.483523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:04.924 [2024-07-12 14:36:41.483583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.483596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:04.924 #23 NEW cov: 12019 ft: 13205 corp: 4/99b lim: 40 exec/s: 0 rss: 72Mb L: 40/40 MS: 1 CopyPart- 00:08:04.924 [2024-07-12 14:36:41.523270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.523295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.924 [2024-07-12 14:36:41.523350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.523364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.924 [2024-07-12 14:36:41.523416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.523429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.924 [2024-07-12 14:36:41.523479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0001007f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.523492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:04.924 #24 NEW cov: 12104 ft: 13588 corp: 5/136b lim: 40 exec/s: 0 rss: 72Mb L: 37/40 MS: 1 PersAutoDict- DE: "\001\000\177`\210\000\"U"- 00:08:04.924 [2024-07-12 14:36:41.563227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.563251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.924 [2024-07-12 14:36:41.563305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.563318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.924 [2024-07-12 14:36:41.563370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.563383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.924 #25 NEW cov: 12104 ft: 13671 corp: 6/167b lim: 40 exec/s: 0 rss: 72Mb L: 31/40 MS: 1 CrossOver- 00:08:04.924 [2024-07-12 14:36:41.603500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.603524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.924 [2024-07-12 14:36:41.603585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.603598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.924 [2024-07-12 14:36:41.603653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.603669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.924 [2024-07-12 14:36:41.603724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0001007f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 
[2024-07-12 14:36:41.603737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:04.924 #26 NEW cov: 12104 ft: 13736 corp: 7/204b lim: 40 exec/s: 0 rss: 72Mb L: 37/40 MS: 1 ShuffleBytes- 00:08:04.924 [2024-07-12 14:36:41.653476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:01007f60 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.653500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.924 [2024-07-12 14:36:41.653560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:88002255 cdw11:fdffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.653574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.924 [2024-07-12 14:36:41.653630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.653642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.924 #27 NEW cov: 12104 ft: 13786 corp: 8/233b lim: 40 exec/s: 0 rss: 72Mb L: 29/40 MS: 1 ChangeBinInt- 00:08:04.924 [2024-07-12 14:36:41.703390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a01007f cdw11:60880022 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:04.924 [2024-07-12 14:36:41.703415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.184 #28 NEW cov: 12104 ft: 14175 corp: 9/242b lim: 40 exec/s: 0 rss: 72Mb L: 9/40 MS: 1 PersAutoDict- DE: "\001\000\177`\210\000\"U"- 00:08:05.184 [2024-07-12 14:36:41.743891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.184 [2024-07-12 14:36:41.743917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.184 [2024-07-12 14:36:41.743972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.184 [2024-07-12 14:36:41.743985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.184 [2024-07-12 14:36:41.744037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.184 [2024-07-12 14:36:41.744050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.184 [2024-07-12 14:36:41.744103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0001007f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.184 [2024-07-12 14:36:41.744116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.184 #34 NEW 
cov: 12104 ft: 14214 corp: 10/279b lim: 40 exec/s: 0 rss: 72Mb L: 37/40 MS: 1 CopyPart- 00:08:05.184 [2024-07-12 14:36:41.794006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:01007f60 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.184 [2024-07-12 14:36:41.794031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.184 [2024-07-12 14:36:41.794089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:88002219 cdw11:19191919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.184 [2024-07-12 14:36:41.794103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.184 [2024-07-12 14:36:41.794156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1955fdff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.184 [2024-07-12 14:36:41.794169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.184 [2024-07-12 14:36:41.794222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.184 [2024-07-12 14:36:41.794235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.184 #35 NEW cov: 12104 ft: 14277 corp: 11/314b lim: 40 exec/s: 0 rss: 72Mb L: 35/40 MS: 1 InsertRepeatedBytes- 00:08:05.184 [2024-07-12 14:36:41.844193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.184 [2024-07-12 14:36:41.844217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.184 [2024-07-12 14:36:41.844290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.184 [2024-07-12 14:36:41.844304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.184 [2024-07-12 14:36:41.844357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.184 [2024-07-12 14:36:41.844371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.184 [2024-07-12 14:36:41.844427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.184 [2024-07-12 14:36:41.844441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.184 #36 NEW cov: 12104 ft: 14333 corp: 12/352b lim: 40 exec/s: 0 rss: 73Mb L: 38/40 MS: 1 CopyPart- 00:08:05.184 [2024-07-12 14:36:41.894304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:05.184 [2024-07-12 14:36:41.894328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.184 [2024-07-12 14:36:41.894382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.184 [2024-07-12 14:36:41.894395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.184 [2024-07-12 14:36:41.894451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.184 [2024-07-12 14:36:41.894464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.184 [2024-07-12 14:36:41.894518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:002a0000 cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.184 [2024-07-12 14:36:41.894535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.184 #37 NEW cov: 12104 ft: 14359 corp: 13/390b lim: 40 exec/s: 0 rss: 73Mb L: 38/40 MS: 1 InsertByte- 00:08:05.184 [2024-07-12 14:36:41.934190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.184 [2024-07-12 14:36:41.934214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.184 [2024-07-12 14:36:41.934270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:005d0aae cdw11:0001007f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.184 [2024-07-12 14:36:41.934284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.184 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:05.184 #38 NEW cov: 12127 ft: 14662 corp: 14/412b lim: 40 exec/s: 0 rss: 73Mb L: 22/40 MS: 1 CrossOver- 00:08:05.443 [2024-07-12 14:36:41.974593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.443 [2024-07-12 14:36:41.974618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.443 [2024-07-12 14:36:41.974672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.443 [2024-07-12 14:36:41.974685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.443 [2024-07-12 14:36:41.974738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.443 [2024-07-12 14:36:41.974752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.443 [2024-07-12 
14:36:41.974805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:002a0000 cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.443 [2024-07-12 14:36:41.974818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.443 #39 NEW cov: 12127 ft: 14692 corp: 15/450b lim: 40 exec/s: 0 rss: 73Mb L: 38/40 MS: 1 ChangeBinInt- 00:08:05.443 [2024-07-12 14:36:42.024399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0a5d0a cdw11:ae0001ae SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.443 [2024-07-12 14:36:42.024423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.443 [2024-07-12 14:36:42.024476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0001007f cdw11:60008800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.443 [2024-07-12 14:36:42.024490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.444 #40 NEW cov: 12127 ft: 14753 corp: 16/469b lim: 40 exec/s: 0 rss: 73Mb L: 19/40 MS: 1 CrossOver- 00:08:05.444 [2024-07-12 14:36:42.074790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5d0aae cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.444 [2024-07-12 14:36:42.074814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.444 [2024-07-12 14:36:42.074869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.444 [2024-07-12 14:36:42.074882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.444 [2024-07-12 14:36:42.074937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.444 [2024-07-12 14:36:42.074951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.444 [2024-07-12 14:36:42.075002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.444 [2024-07-12 14:36:42.075014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.444 #41 NEW cov: 12127 ft: 14765 corp: 17/508b lim: 40 exec/s: 41 rss: 73Mb L: 39/40 MS: 1 CrossOver- 00:08:05.444 [2024-07-12 14:36:42.114875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.444 [2024-07-12 14:36:42.114899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.444 [2024-07-12 14:36:42.114951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.444 
[2024-07-12 14:36:42.114964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.444 [2024-07-12 14:36:42.115016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.444 [2024-07-12 14:36:42.115030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.444 [2024-07-12 14:36:42.115082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:000a0100 cdw11:7f608800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.444 [2024-07-12 14:36:42.115095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.444 #42 NEW cov: 12127 ft: 14772 corp: 18/545b lim: 40 exec/s: 42 rss: 73Mb L: 37/40 MS: 1 CrossOver- 00:08:05.444 [2024-07-12 14:36:42.154879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff0aae00 cdw11:01007f60 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.444 [2024-07-12 14:36:42.154902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.444 [2024-07-12 14:36:42.154957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:88002255 cdw11:fdffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.444 [2024-07-12 14:36:42.154971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.444 [2024-07-12 14:36:42.155023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.444 [2024-07-12 14:36:42.155036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.444 #43 NEW cov: 12127 ft: 14792 corp: 19/574b lim: 40 exec/s: 43 rss: 73Mb L: 29/40 MS: 1 ChangeByte- 00:08:05.444 [2024-07-12 14:36:42.195003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.444 [2024-07-12 14:36:42.195027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.444 [2024-07-12 14:36:42.195079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.444 [2024-07-12 14:36:42.195092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.444 [2024-07-12 14:36:42.195147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00002f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.444 [2024-07-12 14:36:42.195160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.444 #44 NEW cov: 12127 ft: 14800 corp: 20/605b lim: 40 exec/s: 44 rss: 73Mb L: 31/40 MS: 1 ChangeByte- 00:08:05.703 [2024-07-12 14:36:42.235285] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:69000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.235311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.703 [2024-07-12 14:36:42.235366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.235379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.703 [2024-07-12 14:36:42.235433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.235447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.703 [2024-07-12 14:36:42.235500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00002a00 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.235514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.703 #45 NEW cov: 12127 ft: 14838 corp: 21/644b lim: 40 exec/s: 45 rss: 73Mb L: 39/40 MS: 1 InsertByte- 00:08:05.703 [2024-07-12 14:36:42.285281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.285307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.703 [2024-07-12 14:36:42.285359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.285372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.703 [2024-07-12 14:36:42.285424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f000000 cdw11:00002f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.285438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.703 #46 NEW cov: 12127 ft: 14846 corp: 22/675b lim: 40 exec/s: 46 rss: 73Mb L: 31/40 MS: 1 ChangeBinInt- 00:08:05.703 [2024-07-12 14:36:42.335503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d7e000a cdw11:ae000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.335532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.703 [2024-07-12 14:36:42.335588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:7f608800 cdw11:22191919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.335601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:08:05.703 [2024-07-12 14:36:42.335657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:19191955 cdw11:fdffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.335673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.703 [2024-07-12 14:36:42.335724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.335737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.703 #47 NEW cov: 12127 ft: 14871 corp: 23/712b lim: 40 exec/s: 47 rss: 73Mb L: 37/40 MS: 1 CMP- DE: "~\000"- 00:08:05.703 [2024-07-12 14:36:42.375633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:01007f60 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.375658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.703 [2024-07-12 14:36:42.375712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:8800227e cdw11:00191919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.375725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.703 [2024-07-12 14:36:42.375778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:19191955 cdw11:fdffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.375791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.703 [2024-07-12 14:36:42.375844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.375857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.703 #48 NEW cov: 12127 ft: 14909 corp: 24/749b lim: 40 exec/s: 48 rss: 73Mb L: 37/40 MS: 1 PersAutoDict- DE: "~\000"- 00:08:05.703 [2024-07-12 14:36:42.415624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0a8800 cdw11:227e0019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.415648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.703 [2024-07-12 14:36:42.415706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:19191919 cdw11:1955fdff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.415719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.703 [2024-07-12 14:36:42.415771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.415785] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.703 #49 NEW cov: 12127 ft: 14915 corp: 25/780b lim: 40 exec/s: 49 rss: 73Mb L: 31/40 MS: 1 EraseBytes- 00:08:05.703 [2024-07-12 14:36:42.465874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0a00f7 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.465900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.703 [2024-07-12 14:36:42.465953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.465967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.703 [2024-07-12 14:36:42.466021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.466038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.703 [2024-07-12 14:36:42.466089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0001007f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.703 [2024-07-12 14:36:42.466102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.961 #50 NEW cov: 12127 ft: 14996 corp: 26/817b lim: 40 exec/s: 50 rss: 73Mb L: 37/40 MS: 1 ChangeBinInt- 00:08:05.961 [2024-07-12 14:36:42.515889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.515913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.962 [2024-07-12 14:36:42.515963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.515977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.962 [2024-07-12 14:36:42.516030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:10000000 cdw11:00002f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.516045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.962 #51 NEW cov: 12127 ft: 15021 corp: 27/848b lim: 40 exec/s: 51 rss: 73Mb L: 31/40 MS: 1 ChangeBit- 00:08:05.962 [2024-07-12 14:36:42.556035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff0aae00 cdw11:01007f60 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.556061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.962 [2024-07-12 14:36:42.556115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:88002255 cdw11:fdffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.556128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.962 [2024-07-12 14:36:42.556182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:00000800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.556196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.962 #52 NEW cov: 12127 ft: 15022 corp: 28/877b lim: 40 exec/s: 52 rss: 73Mb L: 29/40 MS: 1 ChangeBit- 00:08:05.962 [2024-07-12 14:36:42.606300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0a52ff cdw11:feff809f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.606324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.962 [2024-07-12 14:36:42.606380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:77f8227e cdw11:00191919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.606395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.962 [2024-07-12 14:36:42.606449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:19191955 cdw11:fdffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.606462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.962 [2024-07-12 14:36:42.606517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.606537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.962 #53 NEW cov: 12127 ft: 15026 corp: 29/914b lim: 40 exec/s: 53 rss: 73Mb L: 37/40 MS: 1 ChangeBinInt- 00:08:05.962 [2024-07-12 14:36:42.646410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.646436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.962 [2024-07-12 14:36:42.646489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.646503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.962 [2024-07-12 14:36:42.646560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.646574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.962 [2024-07-12 
14:36:42.646630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:002a0000 cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.646643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.962 #54 NEW cov: 12127 ft: 15040 corp: 30/952b lim: 40 exec/s: 54 rss: 73Mb L: 38/40 MS: 1 ChangeByte- 00:08:05.962 [2024-07-12 14:36:42.686472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0a52ff cdw11:feff809f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.686497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.962 [2024-07-12 14:36:42.686553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:77f8227e cdw11:00191919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.686567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.962 [2024-07-12 14:36:42.686617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:55fdffff cdw11:fdffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.686631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.962 [2024-07-12 14:36:42.686685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.686698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.962 #55 NEW cov: 12127 ft: 15054 corp: 31/989b lim: 40 exec/s: 55 rss: 74Mb L: 37/40 MS: 1 CopyPart- 00:08:05.962 [2024-07-12 14:36:42.736621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.736647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.962 [2024-07-12 14:36:42.736703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.736717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.962 [2024-07-12 14:36:42.736771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.736784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.962 [2024-07-12 14:36:42.736836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0001007f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.962 [2024-07-12 14:36:42.736849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.221 #56 NEW cov: 12127 ft: 15093 corp: 32/1026b lim: 40 exec/s: 56 rss: 74Mb L: 37/40 MS: 1 ChangeBit- 00:08:06.221 [2024-07-12 14:36:42.776775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0a00f7 cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.776801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.221 [2024-07-12 14:36:42.776872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.776886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.221 [2024-07-12 14:36:42.776940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.776955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.221 [2024-07-12 14:36:42.777008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00010000 cdw11:0001007f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.777022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.221 #57 NEW cov: 12127 ft: 15096 corp: 33/1063b lim: 40 exec/s: 57 rss: 74Mb L: 37/40 MS: 1 CopyPart- 00:08:06.221 [2024-07-12 14:36:42.826911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.826936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.221 [2024-07-12 14:36:42.826991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.827004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.221 [2024-07-12 14:36:42.827059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.827072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.221 [2024-07-12 14:36:42.827127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.827140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.221 #60 NEW cov: 12127 ft: 15103 corp: 34/1100b lim: 40 exec/s: 60 rss: 74Mb L: 37/40 MS: 3 CopyPart-InsertByte-InsertRepeatedBytes- 00:08:06.221 [2024-07-12 14:36:42.867041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 
nsid:0 cdw10:5d0aae00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.867068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.221 [2024-07-12 14:36:42.867122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.867136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.221 [2024-07-12 14:36:42.867188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:10000000 cdw11:00002f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.867201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.221 [2024-07-12 14:36:42.867253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.867266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.221 #61 NEW cov: 12127 ft: 15116 corp: 35/1135b lim: 40 exec/s: 61 rss: 74Mb L: 35/40 MS: 1 CopyPart- 00:08:06.221 [2024-07-12 14:36:42.917171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:08000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.917194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.221 [2024-07-12 14:36:42.917251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:7f608800 cdw11:22191919 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.917265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.221 [2024-07-12 14:36:42.917317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:19191955 cdw11:fdffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.917330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.221 [2024-07-12 14:36:42.917384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.917397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.221 #62 NEW cov: 12127 ft: 15120 corp: 36/1172b lim: 40 exec/s: 62 rss: 74Mb L: 37/40 MS: 1 CMP- DE: "\000\010"- 00:08:06.221 [2024-07-12 14:36:42.957044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0aae00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.957067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.221 [2024-07-12 14:36:42.957118] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.957132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.221 #63 NEW cov: 12127 ft: 15142 corp: 37/1194b lim: 40 exec/s: 63 rss: 74Mb L: 22/40 MS: 1 EraseBytes- 00:08:06.221 [2024-07-12 14:36:42.997347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5d0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.997371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.221 [2024-07-12 14:36:42.997427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.997444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.221 [2024-07-12 14:36:42.997499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.997512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.221 [2024-07-12 14:36:42.997571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0001007f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.221 [2024-07-12 14:36:42.997584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.480 #64 NEW cov: 12127 ft: 15184 corp: 38/1231b lim: 40 exec/s: 64 rss: 74Mb L: 37/40 MS: 1 ShuffleBytes- 00:08:06.480 [2024-07-12 14:36:43.037470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.480 [2024-07-12 14:36:43.037494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.480 [2024-07-12 14:36:43.037552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.480 [2024-07-12 14:36:43.037565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.480 [2024-07-12 14:36:43.037634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:fff7ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.480 [2024-07-12 14:36:43.037648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.480 [2024-07-12 14:36:43.037704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:06.480 [2024-07-12 14:36:43.037717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 
m:0 dnr:0 00:08:06.480 #65 NEW cov: 12127 ft: 15200 corp: 39/1268b lim: 40 exec/s: 32 rss: 74Mb L: 37/40 MS: 1 ChangeBit- 00:08:06.480 #65 DONE cov: 12127 ft: 15200 corp: 39/1268b lim: 40 exec/s: 32 rss: 74Mb 00:08:06.480 ###### Recommended dictionary. ###### 00:08:06.480 "\001\000\177`\210\000\"U" # Uses: 2 00:08:06.480 "~\000" # Uses: 1 00:08:06.480 "\000\010" # Uses: 0 00:08:06.480 ###### End of recommended dictionary. ###### 00:08:06.480 Done 65 runs in 2 second(s) 00:08:06.480 14:36:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:08:06.480 14:36:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:06.480 14:36:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:06.480 14:36:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:08:06.480 14:36:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:08:06.480 14:36:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:06.480 14:36:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:06.480 14:36:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:06.480 14:36:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:08:06.480 14:36:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:06.480 14:36:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:06.480 14:36:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:08:06.480 14:36:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:08:06.480 14:36:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:06.480 14:36:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:08:06.480 14:36:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:06.480 14:36:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:06.480 14:36:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:06.480 14:36:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:08:06.480 [2024-07-12 14:36:43.255198] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
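Target 14 is staged the same way, now on port 4414, and the LSAN_OPTIONS value in its trace shows how leak reports are filtered during the run. A rough sketch of recreating that leak-suppression environment by hand, using only values that appear in the trace and assuming the two echo lines above are what populate the suppression file:

  # write the suppression list the harness appears to use, then export the same LSAN settings
  printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > /var/tmp/suppress_nvmf_fuzz
  export LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
  # then launch llvm_nvme_fuzz with the same arguments the harness used for target 14 (trsvcid:4414, -Z 14)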
00:08:06.480 [2024-07-12 14:36:43.255271] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1428613 ] 00:08:06.738 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.738 [2024-07-12 14:36:43.465914] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.997 [2024-07-12 14:36:43.539037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.997 [2024-07-12 14:36:43.598385] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.997 [2024-07-12 14:36:43.614582] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:08:06.997 INFO: Running with entropic power schedule (0xFF, 100). 00:08:06.997 INFO: Seed: 3910470663 00:08:06.997 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:08:06.997 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:08:06.997 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:06.997 INFO: A corpus is not provided, starting from an empty corpus 00:08:06.997 #2 INITED exec/s: 0 rss: 64Mb 00:08:06.997 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:06.997 This may also happen if the target rejected all inputs we tried so far 00:08:06.997 [2024-07-12 14:36:43.685103] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.997 [2024-07-12 14:36:43.685143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.997 [2024-07-12 14:36:43.685251] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.997 [2024-07-12 14:36:43.685270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.997 [2024-07-12 14:36:43.685367] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.997 [2024-07-12 14:36:43.685387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.997 [2024-07-12 14:36:43.685488] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.997 [2024-07-12 14:36:43.685505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.997 [2024-07-12 14:36:43.685620] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.997 [2024-07-12 14:36:43.685640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:07.280 NEW_FUNC[1/696]: 0x497f50 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:08:07.280 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:07.280 #4 NEW cov: 11877 ft: 11870 corp: 2/36b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 2 ChangeBit-InsertRepeatedBytes- 00:08:07.280 [2024-07-12 14:36:44.046567] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.280 [2024-07-12 14:36:44.046614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.280 [2024-07-12 14:36:44.046701] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.280 [2024-07-12 14:36:44.046722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.280 [2024-07-12 14:36:44.046805] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.280 [2024-07-12 14:36:44.046826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.280 [2024-07-12 14:36:44.046913] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.280 [2024-07-12 14:36:44.046931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.280 [2024-07-12 14:36:44.047017] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.280 [2024-07-12 14:36:44.047038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:07.561 #25 NEW cov: 12007 ft: 12390 corp: 3/71b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 ChangeBit- 00:08:07.561 [2024-07-12 14:36:44.114965] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.561 [2024-07-12 14:36:44.114999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.561 [2024-07-12 14:36:44.115100] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.561 [2024-07-12 14:36:44.115118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.561 #26 NEW cov: 12013 ft: 13178 corp: 4/91b lim: 35 exec/s: 0 rss: 72Mb L: 20/35 MS: 1 EraseBytes- 00:08:07.561 [2024-07-12 14:36:44.166134] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.561 [2024-07-12 14:36:44.166162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.561 [2024-07-12 14:36:44.166260] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.561 [2024-07-12 14:36:44.166278] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.561 [2024-07-12 14:36:44.166365] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.561 [2024-07-12 14:36:44.166384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.561 [2024-07-12 14:36:44.166476] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.561 [2024-07-12 14:36:44.166499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.561 [2024-07-12 14:36:44.166594] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.561 [2024-07-12 14:36:44.166613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:07.561 #37 NEW cov: 12098 ft: 13502 corp: 5/126b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:08:07.561 [2024-07-12 14:36:44.226268] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.561 [2024-07-12 14:36:44.226296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.561 [2024-07-12 14:36:44.226392] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.561 [2024-07-12 14:36:44.226412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.561 [2024-07-12 14:36:44.226507] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.561 [2024-07-12 14:36:44.226525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.561 [2024-07-12 14:36:44.226618] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ARBITRATION cid:7 cdw10:80000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.561 [2024-07-12 14:36:44.226636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.561 [2024-07-12 14:36:44.226721] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.561 [2024-07-12 14:36:44.226739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:07.561 NEW_FUNC[1/1]: 0x4b28e0 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:08:07.561 #38 NEW cov: 12132 ft: 13607 corp: 6/161b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 ChangeBinInt- 00:08:07.561 [2024-07-12 14:36:44.276774] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.561 [2024-07-12 
14:36:44.276802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.561 [2024-07-12 14:36:44.276886] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.561 [2024-07-12 14:36:44.276905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.561 [2024-07-12 14:36:44.276985] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.562 [2024-07-12 14:36:44.277004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.562 [2024-07-12 14:36:44.277083] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.562 [2024-07-12 14:36:44.277100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.562 #44 NEW cov: 12132 ft: 13796 corp: 7/190b lim: 35 exec/s: 0 rss: 72Mb L: 29/35 MS: 1 EraseBytes- 00:08:07.562 [2024-07-12 14:36:44.326101] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.562 [2024-07-12 14:36:44.326132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.562 [2024-07-12 14:36:44.326220] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.562 [2024-07-12 14:36:44.326243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.820 NEW_FUNC[1/2]: 0x4b9410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:08:07.820 NEW_FUNC[2/2]: 0x11f0900 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1765 00:08:07.820 #45 NEW cov: 12165 ft: 14007 corp: 8/216b lim: 35 exec/s: 0 rss: 72Mb L: 26/35 MS: 1 CrossOver- 00:08:07.820 [2024-07-12 14:36:44.385835] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.820 [2024-07-12 14:36:44.385863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.820 [2024-07-12 14:36:44.385952] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.820 [2024-07-12 14:36:44.385971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.820 #46 NEW cov: 12165 ft: 14054 corp: 9/236b lim: 35 exec/s: 0 rss: 72Mb L: 20/35 MS: 1 ChangeBinInt- 00:08:07.820 [2024-07-12 14:36:44.446419] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.820 [2024-07-12 14:36:44.446446] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.820 [2024-07-12 14:36:44.446542] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.820 [2024-07-12 14:36:44.446573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.820 [2024-07-12 14:36:44.446662] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.820 [2024-07-12 14:36:44.446680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.820 #49 NEW cov: 12165 ft: 14191 corp: 10/262b lim: 35 exec/s: 0 rss: 72Mb L: 26/35 MS: 3 CrossOver-InsertByte-CrossOver- 00:08:07.820 [2024-07-12 14:36:44.497130] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.820 [2024-07-12 14:36:44.497160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.820 [2024-07-12 14:36:44.497248] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.820 [2024-07-12 14:36:44.497268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.820 [2024-07-12 14:36:44.497357] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.820 [2024-07-12 14:36:44.497373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.820 [2024-07-12 14:36:44.497465] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.820 [2024-07-12 14:36:44.497485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.820 [2024-07-12 14:36:44.497579] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.820 [2024-07-12 14:36:44.497600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:07.820 #50 NEW cov: 12165 ft: 14224 corp: 11/297b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 CrossOver- 00:08:07.820 [2024-07-12 14:36:44.556852] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.820 [2024-07-12 14:36:44.556879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.820 [2024-07-12 14:36:44.556977] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.820 [2024-07-12 14:36:44.556995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:08:07.820 [2024-07-12 14:36:44.557084] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.820 [2024-07-12 14:36:44.557102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.820 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:07.820 #51 NEW cov: 12188 ft: 14267 corp: 12/323b lim: 35 exec/s: 0 rss: 73Mb L: 26/35 MS: 1 CopyPart- 00:08:08.080 [2024-07-12 14:36:44.617707] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.080 [2024-07-12 14:36:44.617735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.080 [2024-07-12 14:36:44.617840] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.080 [2024-07-12 14:36:44.617861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.080 [2024-07-12 14:36:44.617957] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.080 [2024-07-12 14:36:44.617978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.080 [2024-07-12 14:36:44.618076] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.080 [2024-07-12 14:36:44.618096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.080 [2024-07-12 14:36:44.618190] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.080 [2024-07-12 14:36:44.618208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.080 #52 NEW cov: 12188 ft: 14296 corp: 13/358b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ShuffleBytes- 00:08:08.080 [2024-07-12 14:36:44.676827] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.080 [2024-07-12 14:36:44.676856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.080 [2024-07-12 14:36:44.676949] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.080 [2024-07-12 14:36:44.676970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.080 #53 NEW cov: 12188 ft: 14323 corp: 14/378b lim: 35 exec/s: 53 rss: 73Mb L: 20/35 MS: 1 EraseBytes- 00:08:08.080 [2024-07-12 14:36:44.737560] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.080 [2024-07-12 14:36:44.737588] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.080 [2024-07-12 14:36:44.737687] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.080 [2024-07-12 14:36:44.737706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.080 #54 NEW cov: 12188 ft: 14347 corp: 15/399b lim: 35 exec/s: 54 rss: 73Mb L: 21/35 MS: 1 EraseBytes- 00:08:08.080 [2024-07-12 14:36:44.798420] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.080 [2024-07-12 14:36:44.798449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.080 [2024-07-12 14:36:44.798544] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.080 [2024-07-12 14:36:44.798562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.080 [2024-07-12 14:36:44.798654] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.080 [2024-07-12 14:36:44.798672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.080 [2024-07-12 14:36:44.798760] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.080 [2024-07-12 14:36:44.798776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.080 [2024-07-12 14:36:44.798862] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.080 [2024-07-12 14:36:44.798878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.080 #55 NEW cov: 12188 ft: 14356 corp: 16/434b lim: 35 exec/s: 55 rss: 73Mb L: 35/35 MS: 1 CrossOver- 00:08:08.080 [2024-07-12 14:36:44.848571] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.080 [2024-07-12 14:36:44.848599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.080 [2024-07-12 14:36:44.848694] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.080 [2024-07-12 14:36:44.848711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.080 [2024-07-12 14:36:44.848806] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.080 [2024-07-12 14:36:44.848826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.080 
[2024-07-12 14:36:44.848913] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.080 [2024-07-12 14:36:44.848931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.080 [2024-07-12 14:36:44.849021] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.080 [2024-07-12 14:36:44.849041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.339 #56 NEW cov: 12188 ft: 14360 corp: 17/469b lim: 35 exec/s: 56 rss: 73Mb L: 35/35 MS: 1 ChangeBit- 00:08:08.339 [2024-07-12 14:36:44.898792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.339 [2024-07-12 14:36:44.898820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.339 [2024-07-12 14:36:44.898921] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.339 [2024-07-12 14:36:44.898939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.339 [2024-07-12 14:36:44.899030] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.339 [2024-07-12 14:36:44.899049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.339 [2024-07-12 14:36:44.899140] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.339 [2024-07-12 14:36:44.899158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.339 [2024-07-12 14:36:44.899250] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.339 [2024-07-12 14:36:44.899271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.339 #57 NEW cov: 12188 ft: 14405 corp: 18/504b lim: 35 exec/s: 57 rss: 73Mb L: 35/35 MS: 1 ShuffleBytes- 00:08:08.339 [2024-07-12 14:36:44.968900] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.339 [2024-07-12 14:36:44.968931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.339 [2024-07-12 14:36:44.969038] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.339 [2024-07-12 14:36:44.969058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.339 [2024-07-12 14:36:44.969148] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.339 [2024-07-12 14:36:44.969167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.339 [2024-07-12 14:36:44.969264] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.339 [2024-07-12 14:36:44.969284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.339 [2024-07-12 14:36:44.969379] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.339 [2024-07-12 14:36:44.969398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.339 #63 NEW cov: 12188 ft: 14444 corp: 19/539b lim: 35 exec/s: 63 rss: 73Mb L: 35/35 MS: 1 CopyPart- 00:08:08.339 [2024-07-12 14:36:45.039160] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.339 [2024-07-12 14:36:45.039192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.339 [2024-07-12 14:36:45.039292] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.339 [2024-07-12 14:36:45.039314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.339 [2024-07-12 14:36:45.039404] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.339 [2024-07-12 14:36:45.039423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.340 [2024-07-12 14:36:45.039517] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.340 [2024-07-12 14:36:45.039542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.340 [2024-07-12 14:36:45.039640] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.340 [2024-07-12 14:36:45.039657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.340 #64 NEW cov: 12188 ft: 14492 corp: 20/574b lim: 35 exec/s: 64 rss: 73Mb L: 35/35 MS: 1 ShuffleBytes- 00:08:08.340 [2024-07-12 14:36:45.088284] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.340 [2024-07-12 14:36:45.088313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.340 [2024-07-12 14:36:45.088417] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.340 [2024-07-12 14:36:45.088440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.340 #65 NEW cov: 12188 ft: 14557 corp: 21/594b lim: 35 exec/s: 65 rss: 73Mb L: 20/35 MS: 1 CopyPart- 00:08:08.598 [2024-07-12 14:36:45.148515] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.598 [2024-07-12 14:36:45.148552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.598 [2024-07-12 14:36:45.148655] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.598 [2024-07-12 14:36:45.148675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.598 #66 NEW cov: 12188 ft: 14576 corp: 22/611b lim: 35 exec/s: 66 rss: 73Mb L: 17/35 MS: 1 EraseBytes- 00:08:08.598 [2024-07-12 14:36:45.219800] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.598 [2024-07-12 14:36:45.219829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.598 [2024-07-12 14:36:45.219938] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.598 [2024-07-12 14:36:45.219959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.598 [2024-07-12 14:36:45.220047] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.598 [2024-07-12 14:36:45.220066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.599 [2024-07-12 14:36:45.220170] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.599 [2024-07-12 14:36:45.220192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.599 [2024-07-12 14:36:45.220287] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.599 [2024-07-12 14:36:45.220308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.599 #67 NEW cov: 12188 ft: 14609 corp: 23/646b lim: 35 exec/s: 67 rss: 73Mb L: 35/35 MS: 1 ShuffleBytes- 00:08:08.599 [2024-07-12 14:36:45.289349] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.599 [2024-07-12 14:36:45.289378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.599 [2024-07-12 14:36:45.289480] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.599 [2024-07-12 14:36:45.289500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE 
(01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.599 [2024-07-12 14:36:45.289593] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.599 [2024-07-12 14:36:45.289614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.599 #68 NEW cov: 12188 ft: 14642 corp: 24/667b lim: 35 exec/s: 68 rss: 73Mb L: 21/35 MS: 1 InsertByte- 00:08:08.599 [2024-07-12 14:36:45.340243] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.599 [2024-07-12 14:36:45.340272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.599 [2024-07-12 14:36:45.340369] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.599 [2024-07-12 14:36:45.340388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.599 [2024-07-12 14:36:45.340489] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.599 [2024-07-12 14:36:45.340508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.599 [2024-07-12 14:36:45.340607] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.599 [2024-07-12 14:36:45.340627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.599 [2024-07-12 14:36:45.340722] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.599 [2024-07-12 14:36:45.340740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.599 #69 NEW cov: 12188 ft: 14653 corp: 25/702b lim: 35 exec/s: 69 rss: 73Mb L: 35/35 MS: 1 ShuffleBytes- 00:08:08.857 [2024-07-12 14:36:45.390122] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.857 [2024-07-12 14:36:45.390151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.857 [2024-07-12 14:36:45.390258] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.857 [2024-07-12 14:36:45.390276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.857 [2024-07-12 14:36:45.390372] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.857 [2024-07-12 14:36:45.390394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.858 [2024-07-12 14:36:45.390502] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.858 [2024-07-12 14:36:45.390521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.858 #70 NEW cov: 12188 ft: 14666 corp: 26/731b lim: 35 exec/s: 70 rss: 73Mb L: 29/35 MS: 1 CrossOver- 00:08:08.858 [2024-07-12 14:36:45.439947] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.858 [2024-07-12 14:36:45.439976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.858 [2024-07-12 14:36:45.440077] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.858 [2024-07-12 14:36:45.440097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.858 [2024-07-12 14:36:45.440188] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.858 [2024-07-12 14:36:45.440208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.858 #71 NEW cov: 12188 ft: 14674 corp: 27/753b lim: 35 exec/s: 71 rss: 73Mb L: 22/35 MS: 1 EraseBytes- 00:08:08.858 [2024-07-12 14:36:45.511010] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.858 [2024-07-12 14:36:45.511038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.858 [2024-07-12 14:36:45.511142] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.858 [2024-07-12 14:36:45.511162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.858 [2024-07-12 14:36:45.511252] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.858 [2024-07-12 14:36:45.511274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.858 [2024-07-12 14:36:45.511370] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.858 [2024-07-12 14:36:45.511390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.858 [2024-07-12 14:36:45.511489] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.858 [2024-07-12 14:36:45.511508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.858 #72 NEW cov: 12188 ft: 14708 corp: 28/788b lim: 35 exec/s: 72 rss: 73Mb L: 35/35 MS: 1 ShuffleBytes- 00:08:08.858 [2024-07-12 14:36:45.560102] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.858 [2024-07-12 14:36:45.560129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.858 [2024-07-12 14:36:45.560227] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.858 [2024-07-12 14:36:45.560243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.858 #73 NEW cov: 12195 ft: 14735 corp: 29/806b lim: 35 exec/s: 73 rss: 73Mb L: 18/35 MS: 1 EraseBytes- 00:08:08.858 [2024-07-12 14:36:45.620591] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.858 [2024-07-12 14:36:45.620620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.858 [2024-07-12 14:36:45.620715] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.858 [2024-07-12 14:36:45.620736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.858 [2024-07-12 14:36:45.620824] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:08.858 [2024-07-12 14:36:45.620843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.117 #74 NEW cov: 12195 ft: 14743 corp: 30/828b lim: 35 exec/s: 74 rss: 74Mb L: 22/35 MS: 1 ChangeBit- 00:08:09.117 [2024-07-12 14:36:45.681299] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.117 [2024-07-12 14:36:45.681327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.117 [2024-07-12 14:36:45.681416] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.117 [2024-07-12 14:36:45.681436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.117 [2024-07-12 14:36:45.681532] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:09.117 [2024-07-12 14:36:45.681551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.117 #75 NEW cov: 12195 ft: 14753 corp: 31/856b lim: 35 exec/s: 37 rss: 74Mb L: 28/35 MS: 1 CMP- DE: "\005\000"- 00:08:09.118 #75 DONE cov: 12195 ft: 14753 corp: 31/856b lim: 35 exec/s: 37 rss: 74Mb 00:08:09.118 ###### Recommended dictionary. ###### 00:08:09.118 "\005\000" # Uses: 0 00:08:09.118 ###### End of recommended dictionary. 
###### 00:08:09.118 Done 75 runs in 2 second(s) 00:08:09.118 14:36:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:08:09.118 14:36:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:09.118 14:36:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:09.118 14:36:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:08:09.118 14:36:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:08:09.118 14:36:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:09.118 14:36:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:09.118 14:36:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:09.118 14:36:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:08:09.118 14:36:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:09.118 14:36:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:09.118 14:36:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:08:09.118 14:36:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:08:09.118 14:36:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:09.118 14:36:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:08:09.118 14:36:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:09.118 14:36:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:09.118 14:36:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:09.118 14:36:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:08:09.118 [2024-07-12 14:36:45.879301] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:08:09.118 [2024-07-12 14:36:45.879372] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1428975 ] 00:08:09.376 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.376 [2024-07-12 14:36:46.089151] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.376 [2024-07-12 14:36:46.162043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.635 [2024-07-12 14:36:46.221652] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.635 [2024-07-12 14:36:46.237846] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:08:09.635 INFO: Running with entropic power schedule (0xFF, 100). 00:08:09.635 INFO: Seed: 2242471110 00:08:09.635 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:08:09.635 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:08:09.635 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:09.635 INFO: A corpus is not provided, starting from an empty corpus 00:08:09.635 #2 INITED exec/s: 0 rss: 65Mb 00:08:09.635 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:09.635 This may also happen if the target rejected all inputs we tried so far 00:08:09.635 [2024-07-12 14:36:46.293197] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.635 [2024-07-12 14:36:46.293227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.893 NEW_FUNC[1/695]: 0x499490 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:08:09.893 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:09.893 #7 NEW cov: 11865 ft: 11866 corp: 2/14b lim: 35 exec/s: 0 rss: 71Mb L: 13/13 MS: 5 InsertByte-InsertByte-ShuffleBytes-CopyPart-CMP- DE: ">\000\000\000\000\000\000\000"- 00:08:09.893 [2024-07-12 14:36:46.645786] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.893 [2024-07-12 14:36:46.645835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.152 #8 NEW cov: 11995 ft: 12495 corp: 3/27b lim: 35 exec/s: 0 rss: 71Mb L: 13/13 MS: 1 CopyPart- 00:08:10.152 [2024-07-12 14:36:46.716422] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.152 [2024-07-12 14:36:46.716455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.152 [2024-07-12 14:36:46.716559] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000006f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.152 [2024-07-12 14:36:46.716577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.152 #9 NEW 
cov: 12001 ft: 13109 corp: 4/41b lim: 35 exec/s: 0 rss: 71Mb L: 14/14 MS: 1 InsertByte- 00:08:10.152 [2024-07-12 14:36:46.786879] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.152 [2024-07-12 14:36:46.786913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.152 [2024-07-12 14:36:46.787016] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.152 [2024-07-12 14:36:46.787036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.152 #10 NEW cov: 12086 ft: 13316 corp: 5/55b lim: 35 exec/s: 0 rss: 72Mb L: 14/14 MS: 1 PersAutoDict- DE: ">\000\000\000\000\000\000\000"- 00:08:10.152 [2024-07-12 14:36:46.857111] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.152 [2024-07-12 14:36:46.857140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.152 #11 NEW cov: 12086 ft: 13394 corp: 6/68b lim: 35 exec/s: 0 rss: 72Mb L: 13/14 MS: 1 CrossOver- 00:08:10.152 [2024-07-12 14:36:46.907819] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.152 [2024-07-12 14:36:46.907845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.152 [2024-07-12 14:36:46.907947] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.152 [2024-07-12 14:36:46.907975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.410 #12 NEW cov: 12086 ft: 13446 corp: 7/82b lim: 35 exec/s: 0 rss: 72Mb L: 14/14 MS: 1 ChangeBit- 00:08:10.410 [2024-07-12 14:36:46.967702] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.410 [2024-07-12 14:36:46.967728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.410 #13 NEW cov: 12086 ft: 13534 corp: 8/95b lim: 35 exec/s: 0 rss: 72Mb L: 13/14 MS: 1 ChangeBinInt- 00:08:10.410 [2024-07-12 14:36:47.018624] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.410 [2024-07-12 14:36:47.018650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.410 NEW_FUNC[1/1]: 0x4b9410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:08:10.410 #16 NEW cov: 12100 ft: 13629 corp: 9/112b lim: 35 exec/s: 0 rss: 72Mb L: 17/17 MS: 3 InsertByte-EraseBytes-InsertRepeatedBytes- 00:08:10.410 [2024-07-12 14:36:47.068666] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.410 [2024-07-12 14:36:47.068693] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.410 [2024-07-12 14:36:47.068783] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.410 [2024-07-12 14:36:47.068800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.410 #17 NEW cov: 12100 ft: 13685 corp: 10/126b lim: 35 exec/s: 0 rss: 72Mb L: 14/17 MS: 1 ChangeByte- 00:08:10.410 [2024-07-12 14:36:47.118893] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.410 [2024-07-12 14:36:47.118920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.410 [2024-07-12 14:36:47.119032] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000016f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.410 [2024-07-12 14:36:47.119049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.410 #18 NEW cov: 12100 ft: 13714 corp: 11/145b lim: 35 exec/s: 0 rss: 72Mb L: 19/19 MS: 1 CrossOver- 00:08:10.410 [2024-07-12 14:36:47.168923] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.410 [2024-07-12 14:36:47.168949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.410 [2024-07-12 14:36:47.169062] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.410 [2024-07-12 14:36:47.169078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.410 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:10.410 #19 NEW cov: 12123 ft: 13790 corp: 12/162b lim: 35 exec/s: 0 rss: 72Mb L: 17/19 MS: 1 InsertRepeatedBytes- 00:08:10.669 [2024-07-12 14:36:47.219479] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000003e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.669 [2024-07-12 14:36:47.219505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.669 [2024-07-12 14:36:47.219611] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.669 [2024-07-12 14:36:47.219628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.669 [2024-07-12 14:36:47.219729] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.669 [2024-07-12 14:36:47.219746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:10.669 #20 NEW cov: 12123 ft: 14010 corp: 13/183b lim: 35 exec/s: 0 rss: 72Mb L: 21/21 MS: 1 PersAutoDict- DE: ">\000\000\000\000\000\000\000"- 00:08:10.669 [2024-07-12 14:36:47.279568] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.669 [2024-07-12 14:36:47.279595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.669 [2024-07-12 14:36:47.279700] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.669 [2024-07-12 14:36:47.279718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.669 #21 NEW cov: 12123 ft: 14023 corp: 14/200b lim: 35 exec/s: 21 rss: 72Mb L: 17/21 MS: 1 CopyPart- 00:08:10.669 [2024-07-12 14:36:47.340691] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.669 [2024-07-12 14:36:47.340716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.669 [2024-07-12 14:36:47.340926] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.669 [2024-07-12 14:36:47.340942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:10.669 [2024-07-12 14:36:47.341048] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.669 [2024-07-12 14:36:47.341065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:10.669 NEW_FUNC[1/1]: 0x4b28e0 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:08:10.669 #22 NEW cov: 12161 ft: 14405 corp: 15/231b lim: 35 exec/s: 22 rss: 72Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:08:10.669 [2024-07-12 14:36:47.390577] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.669 [2024-07-12 14:36:47.390602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.669 [2024-07-12 14:36:47.390702] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.669 [2024-07-12 14:36:47.390719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.669 #23 NEW cov: 12161 ft: 14428 corp: 16/248b lim: 35 exec/s: 23 rss: 72Mb L: 17/31 MS: 1 ChangeByte- 00:08:10.928 [2024-07-12 14:36:47.461592] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.928 [2024-07-12 14:36:47.461618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.928 [2024-07-12 14:36:47.461830] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.928 [2024-07-12 14:36:47.461846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:10.928 [2024-07-12 14:36:47.461932] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.928 [2024-07-12 14:36:47.461949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:10.928 #24 NEW cov: 12161 ft: 14490 corp: 17/279b lim: 35 exec/s: 24 rss: 72Mb L: 31/31 MS: 1 ChangeBinInt- 00:08:10.928 [2024-07-12 14:36:47.521661] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.928 [2024-07-12 14:36:47.521688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.928 [2024-07-12 14:36:47.521778] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000016f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.928 [2024-07-12 14:36:47.521794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.928 [2024-07-12 14:36:47.521883] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.928 [2024-07-12 14:36:47.521899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:10.928 [2024-07-12 14:36:47.521995] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000003e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.929 [2024-07-12 14:36:47.522011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:10.929 #30 NEW cov: 12161 ft: 14641 corp: 18/311b lim: 35 exec/s: 30 rss: 72Mb L: 32/32 MS: 1 CopyPart- 00:08:10.929 [2024-07-12 14:36:47.581206] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.929 [2024-07-12 14:36:47.581232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.929 [2024-07-12 14:36:47.581331] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TIMESTAMP cid:5 cdw10:0000000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.929 [2024-07-12 14:36:47.581348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.929 #31 NEW cov: 12161 ft: 14663 corp: 19/328b lim: 35 exec/s: 31 rss: 72Mb L: 17/32 MS: 1 CMP- DE: "\016\000"- 00:08:10.929 [2024-07-12 14:36:47.631369] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000062b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.929 [2024-07-12 14:36:47.631396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.929 [2024-07-12 14:36:47.631495] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.929 [2024-07-12 14:36:47.631514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.929 #32 NEW cov: 12161 ft: 14667 corp: 
20/342b lim: 35 exec/s: 32 rss: 73Mb L: 14/32 MS: 1 ChangeByte- 00:08:10.929 [2024-07-12 14:36:47.691623] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000062b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.929 [2024-07-12 14:36:47.691649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.929 [2024-07-12 14:36:47.691743] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.929 [2024-07-12 14:36:47.691759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.187 #33 NEW cov: 12161 ft: 14692 corp: 21/356b lim: 35 exec/s: 33 rss: 73Mb L: 14/32 MS: 1 PersAutoDict- DE: ">\000\000\000\000\000\000\000"- 00:08:11.187 [2024-07-12 14:36:47.751859] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.187 [2024-07-12 14:36:47.751885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.187 [2024-07-12 14:36:47.751992] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000006f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.187 [2024-07-12 14:36:47.752009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.187 #34 NEW cov: 12161 ft: 14707 corp: 22/373b lim: 35 exec/s: 34 rss: 73Mb L: 17/32 MS: 1 CrossOver- 00:08:11.187 [2024-07-12 14:36:47.801989] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.187 [2024-07-12 14:36:47.802016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.187 [2024-07-12 14:36:47.802115] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000016f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.187 [2024-07-12 14:36:47.802132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.187 #35 NEW cov: 12161 ft: 14718 corp: 23/392b lim: 35 exec/s: 35 rss: 73Mb L: 19/32 MS: 1 PersAutoDict- DE: "\016\000"- 00:08:11.187 [2024-07-12 14:36:47.852864] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.187 [2024-07-12 14:36:47.852892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.187 [2024-07-12 14:36:47.852992] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.187 [2024-07-12 14:36:47.853011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.187 [2024-07-12 14:36:47.853116] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.187 [2024-07-12 14:36:47.853133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:08:11.187 [2024-07-12 14:36:47.853233] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.187 [2024-07-12 14:36:47.853251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.187 #36 NEW cov: 12161 ft: 14731 corp: 24/420b lim: 35 exec/s: 36 rss: 73Mb L: 28/32 MS: 1 CrossOver- 00:08:11.187 [2024-07-12 14:36:47.922479] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.187 [2024-07-12 14:36:47.922506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.187 [2024-07-12 14:36:47.922604] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000016f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.187 [2024-07-12 14:36:47.922620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.187 #37 NEW cov: 12161 ft: 14762 corp: 25/439b lim: 35 exec/s: 37 rss: 73Mb L: 19/32 MS: 1 ChangeByte- 00:08:11.187 [2024-07-12 14:36:47.972919] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.187 [2024-07-12 14:36:47.972947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.446 #38 NEW cov: 12161 ft: 14775 corp: 26/457b lim: 35 exec/s: 38 rss: 73Mb L: 18/32 MS: 1 InsertByte- 00:08:11.446 [2024-07-12 14:36:48.033021] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000001ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.446 [2024-07-12 14:36:48.033048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.446 #39 NEW cov: 12161 ft: 14800 corp: 27/475b lim: 35 exec/s: 39 rss: 73Mb L: 18/32 MS: 1 PersAutoDict- DE: ">\000\000\000\000\000\000\000"- 00:08:11.446 [2024-07-12 14:36:48.093160] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.446 [2024-07-12 14:36:48.093189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.446 [2024-07-12 14:36:48.093282] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.446 [2024-07-12 14:36:48.093297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.446 #40 NEW cov: 12161 ft: 14812 corp: 28/489b lim: 35 exec/s: 40 rss: 73Mb L: 14/32 MS: 1 InsertByte- 00:08:11.446 [2024-07-12 14:36:48.153737] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000003e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.446 [2024-07-12 14:36:48.153764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.446 [2024-07-12 14:36:48.153874] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000100 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.446 [2024-07-12 14:36:48.153892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.446 [2024-07-12 14:36:48.153989] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.446 [2024-07-12 14:36:48.154006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.446 #41 NEW cov: 12161 ft: 14826 corp: 29/510b lim: 35 exec/s: 41 rss: 73Mb L: 21/32 MS: 1 CMP- DE: "\001\016"- 00:08:11.446 [2024-07-12 14:36:48.223708] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000001ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.446 [2024-07-12 14:36:48.223734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.705 #42 NEW cov: 12161 ft: 14875 corp: 30/528b lim: 35 exec/s: 42 rss: 73Mb L: 18/32 MS: 1 ChangeByte- 00:08:11.705 [2024-07-12 14:36:48.283745] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.705 [2024-07-12 14:36:48.283771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.705 [2024-07-12 14:36:48.283869] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000006f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.705 [2024-07-12 14:36:48.283887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.705 #43 NEW cov: 12161 ft: 14894 corp: 31/545b lim: 35 exec/s: 21 rss: 73Mb L: 17/32 MS: 1 PersAutoDict- DE: ">\000\000\000\000\000\000\000"- 00:08:11.705 #43 DONE cov: 12161 ft: 14894 corp: 31/545b lim: 35 exec/s: 21 rss: 73Mb 00:08:11.705 ###### Recommended dictionary. ###### 00:08:11.705 ">\000\000\000\000\000\000\000" # Uses: 5 00:08:11.705 "\016\000" # Uses: 1 00:08:11.705 "\001\016" # Uses: 0 00:08:11.705 ###### End of recommended dictionary. 
###### 00:08:11.705 Done 43 runs in 2 second(s) 00:08:11.705 14:36:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:08:11.705 14:36:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:11.705 14:36:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:11.705 14:36:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:08:11.705 14:36:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:08:11.705 14:36:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:11.705 14:36:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:11.705 14:36:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:11.705 14:36:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:08:11.705 14:36:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:11.705 14:36:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:11.705 14:36:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:08:11.705 14:36:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:08:11.705 14:36:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:11.705 14:36:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:08:11.705 14:36:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:11.705 14:36:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:11.705 14:36:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:11.705 14:36:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:08:11.705 [2024-07-12 14:36:48.491781] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
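The trace above shows how the harness stands up fuzzer 16: it removes the previous run's /tmp/fuzz_json_15.conf and LSAN suppression file, derives port 4416 from the fuzzer index, rewrites the stock fuzz_json.conf so the TCP listener uses that port, emits leak suppressions for spdk_nvmf_qpair_disconnect and nvmf_ctrlr_create, and launches llvm_nvme_fuzz against a freshly created corpus directory. A minimal standalone sketch of that launch follows; every path and flag is copied from the trace, but running it outside the Jenkins workspace, and the exact way run.sh redirects the sed/echo output into the config and suppression files, are assumptions.

# Hypothetical manual reproduction of the fuzzer-16 launch traced above (nvmf/run.sh).
SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
PORT=4416                                   # printf %02d 16 -> port 4416 in the trace
CONF=/tmp/fuzz_json_16.conf
CORPUS=$SPDK/../corpus/llvm_nvmf_16

mkdir -p "$CORPUS"

# Reuse the stock nvmf fuzz config, swapping only the listener port (redirection assumed).
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$PORT\"/" \
    "$SPDK/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$CONF"

# Suppress the two known in-target leaks so LSAN does not fail the run (redirection assumed;
# run.sh keeps LSAN_OPTIONS as a local variable, exporting it here is an assumption).
printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > /var/tmp/suppress_nvmf_fuzz
export LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0

# Flags exactly as invoked in the trace: 1 core, 512 MB hugepage memory, 1-minute run,
# TCP target on 127.0.0.1:4416, corpus directory llvm_nvmf_16, fuzzer type 16.
"$SPDK/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
    -P "$SPDK/../output/llvm/" \
    -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$PORT" \
    -c "$CONF" -t 1 -D "$CORPUS" -Z 16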
00:08:11.705 [2024-07-12 14:36:48.491850] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1429331 ] 00:08:11.964 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.964 [2024-07-12 14:36:48.705355] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.222 [2024-07-12 14:36:48.782027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.222 [2024-07-12 14:36:48.841686] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.222 [2024-07-12 14:36:48.857878] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:08:12.222 INFO: Running with entropic power schedule (0xFF, 100). 00:08:12.222 INFO: Seed: 567521001 00:08:12.222 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:08:12.222 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:08:12.222 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:12.222 INFO: A corpus is not provided, starting from an empty corpus 00:08:12.222 #2 INITED exec/s: 0 rss: 64Mb 00:08:12.222 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:12.222 This may also happen if the target rejected all inputs we tried so far 00:08:12.222 [2024-07-12 14:36:48.935567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069951455231 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.222 [2024-07-12 14:36:48.935610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.222 [2024-07-12 14:36:48.935707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.222 [2024-07-12 14:36:48.935723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.222 [2024-07-12 14:36:48.935842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.222 [2024-07-12 14:36:48.935861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.480 NEW_FUNC[1/694]: 0x49a940 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:08:12.480 NEW_FUNC[2/694]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:12.480 #11 NEW cov: 11954 ft: 11969 corp: 2/74b lim: 105 exec/s: 0 rss: 71Mb L: 73/73 MS: 4 ShuffleBytes-ChangeByte-CMP-InsertRepeatedBytes- DE: "\377\377\377\377\377\377\377\005"- 00:08:12.739 [2024-07-12 14:36:49.276018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.739 [2024-07-12 14:36:49.276062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.739 
[2024-07-12 14:36:49.276120] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.739 [2024-07-12 14:36:49.276136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.739 [2024-07-12 14:36:49.276223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.739 [2024-07-12 14:36:49.276243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.739 NEW_FUNC[1/2]: 0x1d953c0 in thread_update_stats /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:928 00:08:12.739 NEW_FUNC[2/2]: 0x1d97270 in spdk_thread_get_last_tsc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1324 00:08:12.739 #13 NEW cov: 12099 ft: 12456 corp: 3/152b lim: 105 exec/s: 0 rss: 71Mb L: 78/78 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:12.739 [2024-07-12 14:36:49.335902] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160913407277945 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.739 [2024-07-12 14:36:49.335938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.739 [2024-07-12 14:36:49.335998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.739 [2024-07-12 14:36:49.336017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.739 [2024-07-12 14:36:49.336086] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.739 [2024-07-12 14:36:49.336101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.739 #14 NEW cov: 12105 ft: 12848 corp: 4/230b lim: 105 exec/s: 0 rss: 71Mb L: 78/78 MS: 1 ChangeBit- 00:08:12.739 [2024-07-12 14:36:49.396116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071461404671 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.739 [2024-07-12 14:36:49.396146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.739 [2024-07-12 14:36:49.396214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.739 [2024-07-12 14:36:49.396233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.739 [2024-07-12 14:36:49.396314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.739 [2024-07-12 14:36:49.396334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.739 #16 NEW cov: 12190 ft: 13065 corp: 
5/312b lim: 105 exec/s: 0 rss: 72Mb L: 82/82 MS: 2 CrossOver-InsertRepeatedBytes- 00:08:12.739 [2024-07-12 14:36:49.446536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.739 [2024-07-12 14:36:49.446564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.739 [2024-07-12 14:36:49.446657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.740 [2024-07-12 14:36:49.446674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.740 [2024-07-12 14:36:49.446757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.740 [2024-07-12 14:36:49.446775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.740 [2024-07-12 14:36:49.446869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:8753160913407277433 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.740 [2024-07-12 14:36:49.446885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.740 #17 NEW cov: 12190 ft: 13607 corp: 6/412b lim: 105 exec/s: 0 rss: 72Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:08:12.740 [2024-07-12 14:36:49.496422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.740 [2024-07-12 14:36:49.496449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.740 [2024-07-12 14:36:49.496512] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.740 [2024-07-12 14:36:49.496537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.740 [2024-07-12 14:36:49.496607] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.740 [2024-07-12 14:36:49.496627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.740 #18 NEW cov: 12190 ft: 13659 corp: 7/490b lim: 105 exec/s: 0 rss: 72Mb L: 78/100 MS: 1 CrossOver- 00:08:12.998 [2024-07-12 14:36:49.546584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160913407277945 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.998 [2024-07-12 14:36:49.546612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.998 [2024-07-12 14:36:49.546682] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31232 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.999 [2024-07-12 14:36:49.546698] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.999 [2024-07-12 14:36:49.546772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.999 [2024-07-12 14:36:49.546788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.999 #19 NEW cov: 12190 ft: 13709 corp: 8/568b lim: 105 exec/s: 0 rss: 72Mb L: 78/100 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\005"- 00:08:12.999 [2024-07-12 14:36:49.606585] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160913407277945 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.999 [2024-07-12 14:36:49.606612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.999 [2024-07-12 14:36:49.606694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.999 [2024-07-12 14:36:49.606709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.999 #20 NEW cov: 12190 ft: 14035 corp: 9/624b lim: 105 exec/s: 0 rss: 72Mb L: 56/100 MS: 1 EraseBytes- 00:08:12.999 [2024-07-12 14:36:49.667251] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.999 [2024-07-12 14:36:49.667277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.999 [2024-07-12 14:36:49.667357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.999 [2024-07-12 14:36:49.667373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.999 [2024-07-12 14:36:49.667437] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:8753160914766231929 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.999 [2024-07-12 14:36:49.667453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.999 [2024-07-12 14:36:49.667546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.999 [2024-07-12 14:36:49.667564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.999 #26 NEW cov: 12190 ft: 14083 corp: 10/727b lim: 105 exec/s: 0 rss: 72Mb L: 103/103 MS: 1 InsertRepeatedBytes- 00:08:12.999 [2024-07-12 14:36:49.727126] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.999 [2024-07-12 14:36:49.727153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.999 [2024-07-12 
14:36:49.727226] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.999 [2024-07-12 14:36:49.727242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.999 [2024-07-12 14:36:49.727316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.999 [2024-07-12 14:36:49.727332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.999 #27 NEW cov: 12190 ft: 14147 corp: 11/799b lim: 105 exec/s: 0 rss: 72Mb L: 72/103 MS: 1 EraseBytes- 00:08:12.999 [2024-07-12 14:36:49.777395] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071461404671 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.999 [2024-07-12 14:36:49.777423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.999 [2024-07-12 14:36:49.777493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.999 [2024-07-12 14:36:49.777512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.999 [2024-07-12 14:36:49.777591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.999 [2024-07-12 14:36:49.777609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.257 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:13.257 #28 NEW cov: 12213 ft: 14208 corp: 12/882b lim: 105 exec/s: 0 rss: 72Mb L: 83/103 MS: 1 CopyPart- 00:08:13.257 [2024-07-12 14:36:49.837836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071461404671 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.257 [2024-07-12 14:36:49.837864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.258 [2024-07-12 14:36:49.837938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.258 [2024-07-12 14:36:49.837959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.258 [2024-07-12 14:36:49.838026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.258 [2024-07-12 14:36:49.838043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.258 [2024-07-12 14:36:49.838140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:8608480570021738359 len:30584 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.258 [2024-07-12 
14:36:49.838158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.258 #29 NEW cov: 12213 ft: 14241 corp: 13/984b lim: 105 exec/s: 0 rss: 72Mb L: 102/103 MS: 1 InsertRepeatedBytes- 00:08:13.258 [2024-07-12 14:36:49.898061] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160913407277945 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.258 [2024-07-12 14:36:49.898092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.258 [2024-07-12 14:36:49.898158] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31232 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.258 [2024-07-12 14:36:49.898176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.258 [2024-07-12 14:36:49.898238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.258 [2024-07-12 14:36:49.898256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.258 [2024-07-12 14:36:49.898344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744071452588543 len:1402 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.258 [2024-07-12 14:36:49.898363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.258 #30 NEW cov: 12213 ft: 14256 corp: 14/1070b lim: 105 exec/s: 30 rss: 72Mb L: 86/103 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\005"- 00:08:13.258 [2024-07-12 14:36:49.948165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160913407277945 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.258 [2024-07-12 14:36:49.948192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.258 [2024-07-12 14:36:49.948274] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31232 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.258 [2024-07-12 14:36:49.948289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.258 [2024-07-12 14:36:49.948366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:8719102441225288057 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.258 [2024-07-12 14:36:49.948382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.258 [2024-07-12 14:36:49.948473] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744071452588543 len:1402 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.258 [2024-07-12 14:36:49.948488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.258 #31 NEW cov: 12213 ft: 14274 corp: 15/1156b lim: 105 exec/s: 31 rss: 72Mb L: 86/103 MS: 1 ChangeByte- 00:08:13.258 
[2024-07-12 14:36:50.008163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160913407277945 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.258 [2024-07-12 14:36:50.008193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.258 [2024-07-12 14:36:50.008261] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.258 [2024-07-12 14:36:50.008281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.258 [2024-07-12 14:36:50.008345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.258 [2024-07-12 14:36:50.008362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.258 #32 NEW cov: 12213 ft: 14277 corp: 16/1234b lim: 105 exec/s: 32 rss: 72Mb L: 78/103 MS: 1 ChangeBit- 00:08:13.517 [2024-07-12 14:36:50.058333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071461404671 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.517 [2024-07-12 14:36:50.058369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.517 [2024-07-12 14:36:50.058432] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.517 [2024-07-12 14:36:50.058448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.517 [2024-07-12 14:36:50.058504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.517 [2024-07-12 14:36:50.058522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.517 #33 NEW cov: 12213 ft: 14398 corp: 17/1317b lim: 105 exec/s: 33 rss: 72Mb L: 83/103 MS: 1 ChangeByte- 00:08:13.517 [2024-07-12 14:36:50.108824] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.517 [2024-07-12 14:36:50.108856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.517 [2024-07-12 14:36:50.108930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.517 [2024-07-12 14:36:50.108947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.517 [2024-07-12 14:36:50.109026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.517 [2024-07-12 14:36:50.109042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 
cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.517 [2024-07-12 14:36:50.109132] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.517 [2024-07-12 14:36:50.109151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.517 #34 NEW cov: 12213 ft: 14477 corp: 18/1420b lim: 105 exec/s: 34 rss: 72Mb L: 103/103 MS: 1 CrossOver- 00:08:13.517 [2024-07-12 14:36:50.168638] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160913407277945 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.517 [2024-07-12 14:36:50.168664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.517 [2024-07-12 14:36:50.168746] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31232 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.517 [2024-07-12 14:36:50.168767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.517 [2024-07-12 14:36:50.168859] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.517 [2024-07-12 14:36:50.168875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.517 #35 NEW cov: 12213 ft: 14500 corp: 19/1498b lim: 105 exec/s: 35 rss: 72Mb L: 78/103 MS: 1 ChangeBinInt- 00:08:13.517 [2024-07-12 14:36:50.218879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160913407277945 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.517 [2024-07-12 14:36:50.218907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.517 [2024-07-12 14:36:50.218973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31232 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.517 [2024-07-12 14:36:50.218992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.517 [2024-07-12 14:36:50.219047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.518 [2024-07-12 14:36:50.219063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.518 #36 NEW cov: 12213 ft: 14516 corp: 20/1576b lim: 105 exec/s: 36 rss: 73Mb L: 78/103 MS: 1 ChangeBinInt- 00:08:13.518 [2024-07-12 14:36:50.279312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160913407277945 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.518 [2024-07-12 14:36:50.279344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.518 [2024-07-12 14:36:50.279427] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31232 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:13.518 [2024-07-12 14:36:50.279451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.518 [2024-07-12 14:36:50.279547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:8719102441225288057 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.518 [2024-07-12 14:36:50.279560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.518 [2024-07-12 14:36:50.279656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:8753160393716234617 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.518 [2024-07-12 14:36:50.279671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.776 #37 NEW cov: 12213 ft: 14573 corp: 21/1662b lim: 105 exec/s: 37 rss: 73Mb L: 86/103 MS: 1 CopyPart- 00:08:13.776 [2024-07-12 14:36:50.349571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071461404671 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.776 [2024-07-12 14:36:50.349601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.776 [2024-07-12 14:36:50.349675] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.776 [2024-07-12 14:36:50.349689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.776 [2024-07-12 14:36:50.349774] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.776 [2024-07-12 14:36:50.349791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.776 [2024-07-12 14:36:50.349886] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:8608480570021738359 len:30584 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.776 [2024-07-12 14:36:50.349906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.776 #43 NEW cov: 12213 ft: 14614 corp: 22/1764b lim: 105 exec/s: 43 rss: 73Mb L: 102/103 MS: 1 ChangeBit- 00:08:13.776 [2024-07-12 14:36:50.409464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160913407277945 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.776 [2024-07-12 14:36:50.409491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.776 [2024-07-12 14:36:50.409551] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31232 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.776 [2024-07-12 14:36:50.409568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.776 [2024-07-12 14:36:50.409668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 
lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.776 [2024-07-12 14:36:50.409690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.776 #44 NEW cov: 12213 ft: 14633 corp: 23/1842b lim: 105 exec/s: 44 rss: 73Mb L: 78/103 MS: 1 CopyPart- 00:08:13.776 [2024-07-12 14:36:50.459647] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160393716235129 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.776 [2024-07-12 14:36:50.459672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.776 [2024-07-12 14:36:50.459749] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31232 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.776 [2024-07-12 14:36:50.459768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.776 [2024-07-12 14:36:50.459840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.776 [2024-07-12 14:36:50.459857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.776 #45 NEW cov: 12213 ft: 14647 corp: 24/1920b lim: 105 exec/s: 45 rss: 73Mb L: 78/103 MS: 1 ChangeByte- 00:08:13.776 [2024-07-12 14:36:50.510065] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.776 [2024-07-12 14:36:50.510096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.776 [2024-07-12 14:36:50.510171] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.776 [2024-07-12 14:36:50.510190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.776 [2024-07-12 14:36:50.510260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.776 [2024-07-12 14:36:50.510279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.776 [2024-07-12 14:36:50.510362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:8753160913407277433 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.776 [2024-07-12 14:36:50.510379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.776 #46 NEW cov: 12213 ft: 14667 corp: 25/2020b lim: 105 exec/s: 46 rss: 73Mb L: 100/103 MS: 1 ChangeBit- 00:08:13.776 [2024-07-12 14:36:50.559766] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160913407277945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.776 [2024-07-12 14:36:50.559794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.776 [2024-07-12 14:36:50.559860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.776 [2024-07-12 14:36:50.559878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.035 #47 NEW cov: 12213 ft: 14678 corp: 26/2076b lim: 105 exec/s: 47 rss: 73Mb L: 56/103 MS: 1 CMP- DE: "\377\000\000\000"- 00:08:14.035 [2024-07-12 14:36:50.630786] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071461404671 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.035 [2024-07-12 14:36:50.630814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.035 [2024-07-12 14:36:50.630911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.035 [2024-07-12 14:36:50.630929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.035 [2024-07-12 14:36:50.631022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.035 [2024-07-12 14:36:50.631043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.035 [2024-07-12 14:36:50.631128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:8608480570021738359 len:30584 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.035 [2024-07-12 14:36:50.631150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.035 [2024-07-12 14:36:50.631236] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:18446744071418902527 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.035 [2024-07-12 14:36:50.631253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:14.035 #48 NEW cov: 12213 ft: 14727 corp: 27/2181b lim: 105 exec/s: 48 rss: 73Mb L: 105/105 MS: 1 CopyPart- 00:08:14.035 [2024-07-12 14:36:50.680147] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160913407277945 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.035 [2024-07-12 14:36:50.680176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.035 [2024-07-12 14:36:50.680241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.035 [2024-07-12 14:36:50.680259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.035 #49 NEW cov: 12213 ft: 14733 corp: 28/2237b lim: 105 exec/s: 49 rss: 73Mb L: 56/105 MS: 1 CrossOver- 00:08:14.035 [2024-07-12 14:36:50.730560] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1519143629599610133 len:5398 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.035 [2024-07-12 14:36:50.730589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.035 [2024-07-12 14:36:50.730681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1519143629599610133 len:5398 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.035 [2024-07-12 14:36:50.730701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.035 [2024-07-12 14:36:50.730775] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:1519143629599610133 len:5398 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.035 [2024-07-12 14:36:50.730796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.035 #51 NEW cov: 12213 ft: 14749 corp: 29/2318b lim: 105 exec/s: 51 rss: 73Mb L: 81/105 MS: 2 CrossOver-InsertRepeatedBytes- 00:08:14.035 [2024-07-12 14:36:50.800844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:8753160393716235129 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.035 [2024-07-12 14:36:50.800876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.035 [2024-07-12 14:36:50.800934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:8753308247965399417 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.035 [2024-07-12 14:36:50.800954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.035 [2024-07-12 14:36:50.801019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.035 [2024-07-12 14:36:50.801037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.294 #52 NEW cov: 12213 ft: 14862 corp: 30/2397b lim: 105 exec/s: 52 rss: 73Mb L: 79/105 MS: 1 InsertByte- 00:08:14.294 [2024-07-12 14:36:50.871101] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744002741927935 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.294 [2024-07-12 14:36:50.871130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.294 [2024-07-12 14:36:50.871196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.294 [2024-07-12 14:36:50.871211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.294 [2024-07-12 14:36:50.871288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.294 [2024-07-12 14:36:50.871304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.294 #58 NEW cov: 12213 ft: 14884 corp: 31/2480b lim: 105 exec/s: 58 rss: 73Mb L: 83/105 MS: 
1 ChangeBit- 00:08:14.294 [2024-07-12 14:36:50.921532] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071461404671 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.294 [2024-07-12 14:36:50.921560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.294 [2024-07-12 14:36:50.921661] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.294 [2024-07-12 14:36:50.921681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.294 [2024-07-12 14:36:50.921752] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.294 [2024-07-12 14:36:50.921767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.294 [2024-07-12 14:36:50.921853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:8608480570021738359 len:30584 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.294 [2024-07-12 14:36:50.921870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.294 #59 NEW cov: 12213 ft: 14886 corp: 32/2582b lim: 105 exec/s: 29 rss: 73Mb L: 102/105 MS: 1 ChangeBinInt- 00:08:14.294 #59 DONE cov: 12213 ft: 14886 corp: 32/2582b lim: 105 exec/s: 29 rss: 73Mb 00:08:14.294 ###### Recommended dictionary. ###### 00:08:14.294 "\377\377\377\377\377\377\377\005" # Uses: 3 00:08:14.294 "\377\000\000\000" # Uses: 1 00:08:14.294 ###### End of recommended dictionary. 
###### 00:08:14.294 Done 59 runs in 2 second(s) 00:08:14.294 14:36:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:08:14.294 14:36:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:14.294 14:36:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:14.294 14:36:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:08:14.294 14:36:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:08:14.294 14:36:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:14.294 14:36:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:14.294 14:36:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:14.294 14:36:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:08:14.294 14:36:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:14.294 14:36:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:14.294 14:36:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:08:14.294 14:36:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:08:14.294 14:36:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:14.556 14:36:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:08:14.556 14:36:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:14.556 14:36:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:14.556 14:36:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:14.556 14:36:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:08:14.556 [2024-07-12 14:36:51.116800] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:08:14.556 [2024-07-12 14:36:51.116871] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1429690 ] 00:08:14.556 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.556 [2024-07-12 14:36:51.334914] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.816 [2024-07-12 14:36:51.411015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.816 [2024-07-12 14:36:51.470535] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.816 [2024-07-12 14:36:51.486713] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:08:14.816 INFO: Running with entropic power schedule (0xFF, 100). 00:08:14.816 INFO: Seed: 3194521680 00:08:14.816 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:08:14.816 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:08:14.816 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:14.816 INFO: A corpus is not provided, starting from an empty corpus 00:08:14.816 #2 INITED exec/s: 0 rss: 65Mb 00:08:14.816 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:14.816 This may also happen if the target rejected all inputs we tried so far 00:08:14.816 [2024-07-12 14:36:51.545943] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.816 [2024-07-12 14:36:51.545973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.816 [2024-07-12 14:36:51.546028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.816 [2024-07-12 14:36:51.546046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.382 NEW_FUNC[1/697]: 0x49dcc0 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:08:15.382 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:15.382 #15 NEW cov: 11984 ft: 11978 corp: 2/72b lim: 120 exec/s: 0 rss: 72Mb L: 71/71 MS: 3 ChangeByte-ChangeBit-InsertRepeatedBytes- 00:08:15.382 [2024-07-12 14:36:51.887178] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.382 [2024-07-12 14:36:51.887245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.382 [2024-07-12 14:36:51.887327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4238681749585920 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.382 [2024-07-12 14:36:51.887357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.382 [2024-07-12 14:36:51.887434] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.382 [2024-07-12 14:36:51.887462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.382 #21 NEW cov: 12120 ft: 12938 corp: 3/151b lim: 120 exec/s: 0 rss: 72Mb L: 79/79 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:08:15.382 [2024-07-12 14:36:51.946874] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:4082 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.382 [2024-07-12 14:36:51.946905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.382 [2024-07-12 14:36:51.946956] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.382 [2024-07-12 14:36:51.946974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.382 #22 NEW cov: 12126 ft: 13261 corp: 4/222b lim: 120 exec/s: 0 rss: 72Mb L: 71/79 MS: 1 ChangeBinInt- 00:08:15.382 [2024-07-12 14:36:51.986908] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.382 [2024-07-12 14:36:51.986940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.382 #23 NEW cov: 12211 ft: 14331 corp: 5/267b lim: 120 exec/s: 0 rss: 72Mb L: 45/79 MS: 1 EraseBytes- 00:08:15.382 [2024-07-12 14:36:52.037186] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.382 [2024-07-12 14:36:52.037216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.382 [2024-07-12 14:36:52.037274] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592319491855 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.382 [2024-07-12 14:36:52.037290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.382 #26 NEW cov: 12211 ft: 14480 corp: 6/317b lim: 120 exec/s: 0 rss: 72Mb L: 50/79 MS: 3 CopyPart-ChangeByte-CrossOver- 00:08:15.382 [2024-07-12 14:36:52.077321] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.382 [2024-07-12 14:36:52.077356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.382 [2024-07-12 14:36:52.077413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.382 [2024-07-12 14:36:52.077427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.382 #27 NEW cov: 12211 ft: 14547 corp: 7/369b lim: 120 exec/s: 0 rss: 72Mb L: 52/79 MS: 1 EraseBytes- 00:08:15.382 [2024-07-12 
14:36:52.117405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.382 [2024-07-12 14:36:52.117434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.382 [2024-07-12 14:36:52.117489] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.382 [2024-07-12 14:36:52.117506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.382 #28 NEW cov: 12211 ft: 14605 corp: 8/431b lim: 120 exec/s: 0 rss: 72Mb L: 62/79 MS: 1 EraseBytes- 00:08:15.382 [2024-07-12 14:36:52.157705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.382 [2024-07-12 14:36:52.157734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.382 [2024-07-12 14:36:52.157772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.382 [2024-07-12 14:36:52.157788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.382 [2024-07-12 14:36:52.157842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1085102592571150095 len:4089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.382 [2024-07-12 14:36:52.157856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.640 #34 NEW cov: 12211 ft: 14724 corp: 9/503b lim: 120 exec/s: 0 rss: 72Mb L: 72/79 MS: 1 InsertByte- 00:08:15.640 [2024-07-12 14:36:52.197669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.640 [2024-07-12 14:36:52.197698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.640 [2024-07-12 14:36:52.197740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.640 [2024-07-12 14:36:52.197756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.640 #35 NEW cov: 12211 ft: 14734 corp: 10/556b lim: 120 exec/s: 0 rss: 72Mb L: 53/79 MS: 1 CopyPart- 00:08:15.640 [2024-07-12 14:36:52.247907] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.640 [2024-07-12 14:36:52.247935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.640 [2024-07-12 14:36:52.247982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.640 [2024-07-12 14:36:52.247998] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.640 [2024-07-12 14:36:52.248054] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.640 [2024-07-12 14:36:52.248074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.640 #36 NEW cov: 12211 ft: 14792 corp: 11/628b lim: 120 exec/s: 0 rss: 72Mb L: 72/79 MS: 1 InsertByte- 00:08:15.640 [2024-07-12 14:36:52.287739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.640 [2024-07-12 14:36:52.287768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.640 #37 NEW cov: 12211 ft: 14869 corp: 12/673b lim: 120 exec/s: 0 rss: 72Mb L: 45/79 MS: 1 ChangeBinInt- 00:08:15.640 [2024-07-12 14:36:52.338151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.640 [2024-07-12 14:36:52.338179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.640 [2024-07-12 14:36:52.338220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.640 [2024-07-12 14:36:52.338237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.640 [2024-07-12 14:36:52.338293] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.640 [2024-07-12 14:36:52.338310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.640 #38 NEW cov: 12211 ft: 14894 corp: 13/752b lim: 120 exec/s: 0 rss: 72Mb L: 79/79 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:08:15.640 [2024-07-12 14:36:52.378125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.640 [2024-07-12 14:36:52.378153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.640 [2024-07-12 14:36:52.378207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.640 [2024-07-12 14:36:52.378224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.640 #39 NEW cov: 12211 ft: 14957 corp: 14/814b lim: 120 exec/s: 0 rss: 72Mb L: 62/79 MS: 1 ShuffleBytes- 00:08:15.897 [2024-07-12 14:36:52.428454] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.897 [2024-07-12 14:36:52.428483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.897 [2024-07-12 14:36:52.428521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.897 [2024-07-12 14:36:52.428542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.897 [2024-07-12 14:36:52.428597] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1085102592571150095 len:3841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.897 [2024-07-12 14:36:52.428613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.897 #40 NEW cov: 12211 ft: 14977 corp: 15/886b lim: 120 exec/s: 0 rss: 72Mb L: 72/79 MS: 1 ChangeBinInt- 00:08:15.897 [2024-07-12 14:36:52.478423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.897 [2024-07-12 14:36:52.478455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.897 [2024-07-12 14:36:52.478510] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.897 [2024-07-12 14:36:52.478531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.898 #41 NEW cov: 12211 ft: 15028 corp: 16/949b lim: 120 exec/s: 0 rss: 72Mb L: 63/79 MS: 1 InsertByte- 00:08:15.898 [2024-07-12 14:36:52.518654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.898 [2024-07-12 14:36:52.518682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.898 [2024-07-12 14:36:52.518723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.898 [2024-07-12 14:36:52.518741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.898 [2024-07-12 14:36:52.518797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1085102592571150095 len:4089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.898 [2024-07-12 14:36:52.518814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.898 #42 NEW cov: 12211 ft: 15075 corp: 17/1021b lim: 120 exec/s: 42 rss: 73Mb L: 72/79 MS: 1 ChangeByte- 00:08:15.898 [2024-07-12 14:36:52.568977] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.898 [2024-07-12 14:36:52.569005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.898 [2024-07-12 14:36:52.569052] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:15.898 [2024-07-12 14:36:52.569068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.898 [2024-07-12 14:36:52.569121] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10851025925711500950 len:38551 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.898 [2024-07-12 14:36:52.569139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.898 [2024-07-12 14:36:52.569192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:1085103071586326031 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.898 [2024-07-12 14:36:52.569207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:15.898 #43 NEW cov: 12211 ft: 15449 corp: 18/1130b lim: 120 exec/s: 43 rss: 73Mb L: 109/109 MS: 1 InsertRepeatedBytes- 00:08:15.898 [2024-07-12 14:36:52.618994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:4096 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.898 [2024-07-12 14:36:52.619023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.898 [2024-07-12 14:36:52.619060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102596613472015 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.898 [2024-07-12 14:36:52.619077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.898 [2024-07-12 14:36:52.619133] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.898 [2024-07-12 14:36:52.619153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:15.898 #44 NEW cov: 12211 ft: 15466 corp: 19/1208b lim: 120 exec/s: 44 rss: 73Mb L: 78/109 MS: 1 InsertRepeatedBytes- 00:08:15.898 [2024-07-12 14:36:52.669158] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.898 [2024-07-12 14:36:52.669187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.898 [2024-07-12 14:36:52.669225] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.898 [2024-07-12 14:36:52.669240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.898 [2024-07-12 14:36:52.669296] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1085102592571150095 len:4089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:15.898 [2024-07-12 14:36:52.669313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.155 #45 NEW cov: 12211 ft: 15488 corp: 20/1280b lim: 120 exec/s: 45 rss: 73Mb L: 72/109 MS: 1 ChangeBit- 00:08:16.155 [2024-07-12 
14:36:52.708947] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150100 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.155 [2024-07-12 14:36:52.708975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.155 #46 NEW cov: 12211 ft: 15509 corp: 21/1325b lim: 120 exec/s: 46 rss: 73Mb L: 45/109 MS: 1 ChangeBinInt- 00:08:16.155 [2024-07-12 14:36:52.749339] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.155 [2024-07-12 14:36:52.749368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.155 [2024-07-12 14:36:52.749404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.156 [2024-07-12 14:36:52.749420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.156 [2024-07-12 14:36:52.749474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1085102592571150095 len:4089 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.156 [2024-07-12 14:36:52.749491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.156 #47 NEW cov: 12211 ft: 15550 corp: 22/1397b lim: 120 exec/s: 47 rss: 73Mb L: 72/109 MS: 1 ChangeBinInt- 00:08:16.156 [2024-07-12 14:36:52.789310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.156 [2024-07-12 14:36:52.789339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.156 [2024-07-12 14:36:52.789396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.156 [2024-07-12 14:36:52.789413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.156 #48 NEW cov: 12211 ft: 15631 corp: 23/1449b lim: 120 exec/s: 48 rss: 73Mb L: 52/109 MS: 1 ChangeBinInt- 00:08:16.156 [2024-07-12 14:36:52.829281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:4082 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.156 [2024-07-12 14:36:52.829312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.156 #49 NEW cov: 12211 ft: 15647 corp: 24/1493b lim: 120 exec/s: 49 rss: 73Mb L: 44/109 MS: 1 EraseBytes- 00:08:16.156 [2024-07-12 14:36:52.879632] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.156 [2024-07-12 14:36:52.879660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.156 [2024-07-12 14:36:52.879724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 
lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.156 [2024-07-12 14:36:52.879740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.156 #50 NEW cov: 12211 ft: 15648 corp: 25/1550b lim: 120 exec/s: 50 rss: 73Mb L: 57/109 MS: 1 InsertRepeatedBytes- 00:08:16.156 [2024-07-12 14:36:52.919889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.156 [2024-07-12 14:36:52.919917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.156 [2024-07-12 14:36:52.919956] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.156 [2024-07-12 14:36:52.919972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.156 [2024-07-12 14:36:52.920026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1085102661290626831 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.156 [2024-07-12 14:36:52.920043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.156 #51 NEW cov: 12211 ft: 15690 corp: 26/1622b lim: 120 exec/s: 51 rss: 73Mb L: 72/109 MS: 1 ChangeBit- 00:08:16.413 [2024-07-12 14:36:52.959624] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.414 [2024-07-12 14:36:52.959653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.414 #57 NEW cov: 12211 ft: 15714 corp: 27/1667b lim: 120 exec/s: 57 rss: 73Mb L: 45/109 MS: 1 ChangeBit- 00:08:16.414 [2024-07-12 14:36:53.000076] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.414 [2024-07-12 14:36:53.000104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.414 [2024-07-12 14:36:53.000144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.414 [2024-07-12 14:36:53.000160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.414 [2024-07-12 14:36:53.000215] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.414 [2024-07-12 14:36:53.000232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.414 #58 NEW cov: 12211 ft: 15716 corp: 28/1746b lim: 120 exec/s: 58 rss: 73Mb L: 79/109 MS: 1 CrossOver- 00:08:16.414 [2024-07-12 14:36:53.050061] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.414 [2024-07-12 14:36:53.050088] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.414 [2024-07-12 14:36:53.050137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592319491855 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.414 [2024-07-12 14:36:53.050153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.414 #59 NEW cov: 12211 ft: 15749 corp: 29/1797b lim: 120 exec/s: 59 rss: 73Mb L: 51/109 MS: 1 InsertByte- 00:08:16.414 [2024-07-12 14:36:53.100040] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.414 [2024-07-12 14:36:53.100068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.414 #60 NEW cov: 12211 ft: 15760 corp: 30/1827b lim: 120 exec/s: 60 rss: 73Mb L: 30/109 MS: 1 CrossOver- 00:08:16.414 [2024-07-12 14:36:53.140144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.414 [2024-07-12 14:36:53.140172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.414 #61 NEW cov: 12211 ft: 15773 corp: 31/1872b lim: 120 exec/s: 61 rss: 73Mb L: 45/109 MS: 1 ChangeBinInt- 00:08:16.414 [2024-07-12 14:36:53.190634] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3984 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.414 [2024-07-12 14:36:53.190663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.414 [2024-07-12 14:36:53.190703] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.414 [2024-07-12 14:36:53.190720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.414 [2024-07-12 14:36:53.190774] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.414 [2024-07-12 14:36:53.190789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.672 #62 NEW cov: 12211 ft: 15845 corp: 32/1951b lim: 120 exec/s: 62 rss: 73Mb L: 79/109 MS: 1 ChangeBit- 00:08:16.672 [2024-07-12 14:36:53.240653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-07-12 14:36:53.240683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.672 [2024-07-12 14:36:53.240729] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-07-12 14:36:53.240743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:08:16.672 #63 NEW cov: 12211 ft: 15861 corp: 33/2014b lim: 120 exec/s: 63 rss: 73Mb L: 63/109 MS: 1 InsertByte- 00:08:16.672 [2024-07-12 14:36:53.280890] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-07-12 14:36:53.280919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.672 [2024-07-12 14:36:53.280960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-07-12 14:36:53.280976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.672 [2024-07-12 14:36:53.281033] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1085102592571150095 len:3841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-07-12 14:36:53.281054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.672 #64 NEW cov: 12211 ft: 15910 corp: 34/2086b lim: 120 exec/s: 64 rss: 73Mb L: 72/109 MS: 1 CopyPart- 00:08:16.672 [2024-07-12 14:36:53.330845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-07-12 14:36:53.330873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.672 [2024-07-12 14:36:53.330917] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-07-12 14:36:53.330932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.672 #65 NEW cov: 12211 ft: 15915 corp: 35/2139b lim: 120 exec/s: 65 rss: 73Mb L: 53/109 MS: 1 InsertByte- 00:08:16.672 [2024-07-12 14:36:53.381171] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:4096 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-07-12 14:36:53.381201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.672 [2024-07-12 14:36:53.381248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102596613472015 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-07-12 14:36:53.381264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.672 [2024-07-12 14:36:53.381318] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-07-12 14:36:53.381336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.672 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:16.672 #66 NEW cov: 12234 ft: 15922 corp: 36/2217b lim: 120 exec/s: 66 rss: 
74Mb L: 78/109 MS: 1 ShuffleBytes- 00:08:16.672 [2024-07-12 14:36:53.431118] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-07-12 14:36:53.431148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.672 [2024-07-12 14:36:53.431192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.672 [2024-07-12 14:36:53.431206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.672 #67 NEW cov: 12234 ft: 15938 corp: 37/2273b lim: 120 exec/s: 67 rss: 74Mb L: 56/109 MS: 1 CMP- DE: "\377\377\000\000"- 00:08:16.931 [2024-07-12 14:36:53.471266] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.931 [2024-07-12 14:36:53.471296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.931 [2024-07-12 14:36:53.471349] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.931 [2024-07-12 14:36:53.471366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.931 #68 NEW cov: 12234 ft: 15970 corp: 38/2329b lim: 120 exec/s: 68 rss: 74Mb L: 56/109 MS: 1 CrossOver- 00:08:16.931 [2024-07-12 14:36:53.511359] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.931 [2024-07-12 14:36:53.511389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.931 [2024-07-12 14:36:53.511446] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1085102592571150095 len:3856 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:16.931 [2024-07-12 14:36:53.511463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.931 #69 NEW cov: 12234 ft: 15986 corp: 39/2381b lim: 120 exec/s: 34 rss: 74Mb L: 52/109 MS: 1 ChangeBinInt- 00:08:16.931 #69 DONE cov: 12234 ft: 15986 corp: 39/2381b lim: 120 exec/s: 34 rss: 74Mb 00:08:16.931 ###### Recommended dictionary. ###### 00:08:16.931 "\000\000\000\000\000\000\000\000" # Uses: 1 00:08:16.931 "\377\377\000\000" # Uses: 0 00:08:16.931 ###### End of recommended dictionary. 
###### 00:08:16.931 Done 69 runs in 2 second(s) 00:08:16.931 14:36:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:08:16.931 14:36:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:16.931 14:36:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:16.931 14:36:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:08:16.931 14:36:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:08:16.931 14:36:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:16.931 14:36:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:16.931 14:36:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:16.931 14:36:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:08:16.931 14:36:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:16.931 14:36:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:16.931 14:36:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:08:16.931 14:36:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:08:16.931 14:36:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:16.931 14:36:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:08:16.931 14:36:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:16.931 14:36:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:16.931 14:36:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:16.931 14:36:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:08:16.931 [2024-07-12 14:36:53.716516] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:08:16.931 [2024-07-12 14:36:53.716594] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1430046 ] 00:08:17.189 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.189 [2024-07-12 14:36:53.926304] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.447 [2024-07-12 14:36:54.000078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.447 [2024-07-12 14:36:54.059548] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.447 [2024-07-12 14:36:54.075744] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:08:17.447 INFO: Running with entropic power schedule (0xFF, 100). 00:08:17.447 INFO: Seed: 1488556652 00:08:17.447 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:08:17.447 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:08:17.447 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:17.447 INFO: A corpus is not provided, starting from an empty corpus 00:08:17.447 #2 INITED exec/s: 0 rss: 65Mb 00:08:17.447 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:17.447 This may also happen if the target rejected all inputs we tried so far 00:08:17.447 [2024-07-12 14:36:54.135063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:17.447 [2024-07-12 14:36:54.135093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.447 [2024-07-12 14:36:54.135127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:17.447 [2024-07-12 14:36:54.135141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.447 [2024-07-12 14:36:54.135191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:17.447 [2024-07-12 14:36:54.135205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.447 [2024-07-12 14:36:54.135253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:17.447 [2024-07-12 14:36:54.135267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.704 NEW_FUNC[1/694]: 0x4a15b0 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:08:17.704 NEW_FUNC[2/694]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:17.704 #14 NEW cov: 11902 ft: 11929 corp: 2/87b lim: 100 exec/s: 0 rss: 72Mb L: 86/86 MS: 2 ChangeBit-InsertRepeatedBytes- 00:08:17.704 [2024-07-12 14:36:54.476227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:17.704 [2024-07-12 14:36:54.476291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:08:17.704 [2024-07-12 14:36:54.476372] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:17.704 [2024-07-12 14:36:54.476399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.704 [2024-07-12 14:36:54.476478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:17.704 [2024-07-12 14:36:54.476504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.704 [2024-07-12 14:36:54.476592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:17.704 [2024-07-12 14:36:54.476619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.962 NEW_FUNC[1/1]: 0x17a5530 in _nvme_qpair_complete_abort_queued_reqs /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:593 00:08:17.962 #15 NEW cov: 12063 ft: 12559 corp: 3/173b lim: 100 exec/s: 0 rss: 72Mb L: 86/86 MS: 1 ChangeByte- 00:08:17.962 [2024-07-12 14:36:54.535803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:17.962 [2024-07-12 14:36:54.535832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.962 #19 NEW cov: 12069 ft: 13215 corp: 4/209b lim: 100 exec/s: 0 rss: 72Mb L: 36/86 MS: 4 ShuffleBytes-ChangeBinInt-ShuffleBytes-InsertRepeatedBytes- 00:08:17.962 [2024-07-12 14:36:54.576320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:17.962 [2024-07-12 14:36:54.576348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.962 [2024-07-12 14:36:54.576395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:17.962 [2024-07-12 14:36:54.576409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.962 [2024-07-12 14:36:54.576462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:17.962 [2024-07-12 14:36:54.576478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.962 [2024-07-12 14:36:54.576538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:17.962 [2024-07-12 14:36:54.576553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.962 #20 NEW cov: 12154 ft: 13452 corp: 5/295b lim: 100 exec/s: 0 rss: 72Mb L: 86/86 MS: 1 ChangeByte- 00:08:17.962 [2024-07-12 14:36:54.626473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:17.962 [2024-07-12 14:36:54.626502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.962 [2024-07-12 14:36:54.626592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:17.962 [2024-07-12 14:36:54.626609] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.962 [2024-07-12 14:36:54.626665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:17.962 [2024-07-12 14:36:54.626681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.962 [2024-07-12 14:36:54.626738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:17.962 [2024-07-12 14:36:54.626753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.962 #21 NEW cov: 12154 ft: 13646 corp: 6/381b lim: 100 exec/s: 0 rss: 72Mb L: 86/86 MS: 1 CMP- DE: "\015\000"- 00:08:17.962 [2024-07-12 14:36:54.666228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:17.962 [2024-07-12 14:36:54.666254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.962 #22 NEW cov: 12154 ft: 13749 corp: 7/408b lim: 100 exec/s: 0 rss: 72Mb L: 27/86 MS: 1 EraseBytes- 00:08:17.963 [2024-07-12 14:36:54.716707] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:17.963 [2024-07-12 14:36:54.716735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.963 [2024-07-12 14:36:54.716775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:17.963 [2024-07-12 14:36:54.716789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.963 [2024-07-12 14:36:54.716843] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:17.963 [2024-07-12 14:36:54.716857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.963 [2024-07-12 14:36:54.716911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:17.963 [2024-07-12 14:36:54.716926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.220 #23 NEW cov: 12154 ft: 13816 corp: 8/494b lim: 100 exec/s: 0 rss: 72Mb L: 86/86 MS: 1 ChangeBit- 00:08:18.220 [2024-07-12 14:36:54.766715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.220 [2024-07-12 14:36:54.766742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.220 [2024-07-12 14:36:54.766784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:18.220 [2024-07-12 14:36:54.766799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.220 [2024-07-12 14:36:54.766853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:18.220 [2024-07-12 14:36:54.766868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.220 #24 NEW cov: 12154 ft: 14096 corp: 9/567b lim: 100 exec/s: 0 rss: 72Mb L: 73/86 MS: 1 CrossOver- 00:08:18.220 [2024-07-12 14:36:54.817007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.220 [2024-07-12 14:36:54.817034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.220 [2024-07-12 14:36:54.817083] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:18.220 [2024-07-12 14:36:54.817098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.220 [2024-07-12 14:36:54.817151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:18.220 [2024-07-12 14:36:54.817164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.220 [2024-07-12 14:36:54.817219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:18.220 [2024-07-12 14:36:54.817233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.220 #30 NEW cov: 12154 ft: 14152 corp: 10/653b lim: 100 exec/s: 0 rss: 72Mb L: 86/86 MS: 1 ChangeBit- 00:08:18.220 [2024-07-12 14:36:54.856719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.220 [2024-07-12 14:36:54.856746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.220 #31 NEW cov: 12154 ft: 14187 corp: 11/680b lim: 100 exec/s: 0 rss: 72Mb L: 27/86 MS: 1 CrossOver- 00:08:18.220 [2024-07-12 14:36:54.906878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.220 [2024-07-12 14:36:54.906904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.220 #32 NEW cov: 12154 ft: 14265 corp: 12/703b lim: 100 exec/s: 0 rss: 72Mb L: 23/86 MS: 1 EraseBytes- 00:08:18.220 [2024-07-12 14:36:54.947095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.220 [2024-07-12 14:36:54.947121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.220 [2024-07-12 14:36:54.947164] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:18.220 [2024-07-12 14:36:54.947178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.220 #33 NEW cov: 12154 ft: 14513 corp: 13/756b lim: 100 exec/s: 0 rss: 72Mb L: 53/86 MS: 1 InsertRepeatedBytes- 00:08:18.220 [2024-07-12 14:36:54.987426] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.220 [2024-07-12 14:36:54.987451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.220 [2024-07-12 14:36:54.987497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:18.220 [2024-07-12 14:36:54.987512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.220 [2024-07-12 14:36:54.987586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:18.220 [2024-07-12 14:36:54.987601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.220 [2024-07-12 14:36:54.987657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:18.220 [2024-07-12 14:36:54.987672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.477 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:18.477 #34 NEW cov: 12177 ft: 14635 corp: 14/842b lim: 100 exec/s: 0 rss: 73Mb L: 86/86 MS: 1 CrossOver- 00:08:18.477 [2024-07-12 14:36:55.037468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.478 [2024-07-12 14:36:55.037495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.478 [2024-07-12 14:36:55.037545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:18.478 [2024-07-12 14:36:55.037561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.478 [2024-07-12 14:36:55.037631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:18.478 [2024-07-12 14:36:55.037647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.478 #35 NEW cov: 12177 ft: 14684 corp: 15/915b lim: 100 exec/s: 0 rss: 73Mb L: 73/86 MS: 1 ShuffleBytes- 00:08:18.478 [2024-07-12 14:36:55.087387] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.478 [2024-07-12 14:36:55.087414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.478 #36 NEW cov: 12177 ft: 14730 corp: 16/943b lim: 100 exec/s: 0 rss: 73Mb L: 28/86 MS: 1 InsertByte- 00:08:18.478 [2024-07-12 14:36:55.127815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.478 [2024-07-12 14:36:55.127840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.478 [2024-07-12 14:36:55.127894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:18.478 [2024-07-12 14:36:55.127910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.478 [2024-07-12 14:36:55.127964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:18.478 [2024-07-12 14:36:55.127980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.478 [2024-07-12 14:36:55.128038] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:18.478 [2024-07-12 14:36:55.128052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.478 #37 NEW cov: 12177 ft: 14798 corp: 17/1029b lim: 100 exec/s: 37 rss: 73Mb L: 86/86 MS: 1 CopyPart- 00:08:18.478 [2024-07-12 14:36:55.177968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.478 [2024-07-12 14:36:55.177993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.478 [2024-07-12 14:36:55.178050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:18.478 [2024-07-12 14:36:55.178067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.478 [2024-07-12 14:36:55.178118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:18.478 [2024-07-12 14:36:55.178132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.478 [2024-07-12 14:36:55.178188] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:18.478 [2024-07-12 14:36:55.178202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.478 #38 NEW cov: 12177 ft: 14844 corp: 18/1115b lim: 100 exec/s: 38 rss: 73Mb L: 86/86 MS: 1 PersAutoDict- DE: "\015\000"- 00:08:18.478 [2024-07-12 14:36:55.218094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.478 [2024-07-12 14:36:55.218120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.478 [2024-07-12 14:36:55.218169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:18.478 [2024-07-12 14:36:55.218184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.478 [2024-07-12 14:36:55.218237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:18.478 [2024-07-12 14:36:55.218251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.478 [2024-07-12 14:36:55.218305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:18.478 [2024-07-12 14:36:55.218320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.478 #39 NEW cov: 12177 ft: 14899 corp: 19/1208b lim: 100 exec/s: 39 rss: 73Mb L: 93/93 MS: 1 InsertRepeatedBytes- 00:08:18.735 [2024-07-12 14:36:55.267970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.735 [2024-07-12 14:36:55.268000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.735 #40 NEW cov: 12177 ft: 14918 corp: 20/1239b lim: 100 exec/s: 40 rss: 73Mb L: 
31/93 MS: 1 CrossOver- 00:08:18.735 [2024-07-12 14:36:55.318284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.735 [2024-07-12 14:36:55.318310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.735 [2024-07-12 14:36:55.318359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:18.735 [2024-07-12 14:36:55.318373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.735 [2024-07-12 14:36:55.318428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:18.735 [2024-07-12 14:36:55.318442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.735 #46 NEW cov: 12177 ft: 15011 corp: 21/1312b lim: 100 exec/s: 46 rss: 73Mb L: 73/93 MS: 1 ChangeBit- 00:08:18.735 [2024-07-12 14:36:55.358183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.735 [2024-07-12 14:36:55.358209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.735 #47 NEW cov: 12177 ft: 15040 corp: 22/1343b lim: 100 exec/s: 47 rss: 73Mb L: 31/93 MS: 1 CrossOver- 00:08:18.735 [2024-07-12 14:36:55.408544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.735 [2024-07-12 14:36:55.408571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.735 [2024-07-12 14:36:55.408612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:18.735 [2024-07-12 14:36:55.408627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.735 [2024-07-12 14:36:55.408682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:18.735 [2024-07-12 14:36:55.408697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.735 #48 NEW cov: 12177 ft: 15091 corp: 23/1416b lim: 100 exec/s: 48 rss: 73Mb L: 73/93 MS: 1 ChangeBit- 00:08:18.735 [2024-07-12 14:36:55.458873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.735 [2024-07-12 14:36:55.458899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.735 [2024-07-12 14:36:55.458948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:18.735 [2024-07-12 14:36:55.458962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.735 [2024-07-12 14:36:55.459018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:18.735 [2024-07-12 14:36:55.459031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.735 [2024-07-12 14:36:55.459087] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:18.735 [2024-07-12 14:36:55.459102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.735 #49 NEW cov: 12177 ft: 15103 corp: 24/1502b lim: 100 exec/s: 49 rss: 73Mb L: 86/93 MS: 1 ChangeBit- 00:08:18.735 [2024-07-12 14:36:55.508949] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.735 [2024-07-12 14:36:55.508975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.735 [2024-07-12 14:36:55.509028] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:18.735 [2024-07-12 14:36:55.509043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.735 [2024-07-12 14:36:55.509097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:18.735 [2024-07-12 14:36:55.509110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.735 [2024-07-12 14:36:55.509167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:18.735 [2024-07-12 14:36:55.509181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.992 #55 NEW cov: 12177 ft: 15113 corp: 25/1588b lim: 100 exec/s: 55 rss: 73Mb L: 86/93 MS: 1 PersAutoDict- DE: "\015\000"- 00:08:18.992 [2024-07-12 14:36:55.549083] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.992 [2024-07-12 14:36:55.549109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.992 [2024-07-12 14:36:55.549159] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:18.992 [2024-07-12 14:36:55.549174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.992 [2024-07-12 14:36:55.549229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:18.992 [2024-07-12 14:36:55.549244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.992 [2024-07-12 14:36:55.549304] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:18.992 [2024-07-12 14:36:55.549318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.992 #56 NEW cov: 12177 ft: 15171 corp: 26/1674b lim: 100 exec/s: 56 rss: 73Mb L: 86/93 MS: 1 CrossOver- 00:08:18.992 [2024-07-12 14:36:55.599193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.992 [2024-07-12 14:36:55.599219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.992 [2024-07-12 14:36:55.599274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:18.992 [2024-07-12 14:36:55.599289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.992 [2024-07-12 14:36:55.599344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:18.992 [2024-07-12 14:36:55.599359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.992 [2024-07-12 14:36:55.599429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:18.992 [2024-07-12 14:36:55.599444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.992 #57 NEW cov: 12177 ft: 15202 corp: 27/1771b lim: 100 exec/s: 57 rss: 73Mb L: 97/97 MS: 1 InsertRepeatedBytes- 00:08:18.992 [2024-07-12 14:36:55.639334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.992 [2024-07-12 14:36:55.639359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.992 [2024-07-12 14:36:55.639410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:18.992 [2024-07-12 14:36:55.639424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.992 [2024-07-12 14:36:55.639478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:18.992 [2024-07-12 14:36:55.639492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.992 [2024-07-12 14:36:55.639552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:18.992 [2024-07-12 14:36:55.639567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.992 #58 NEW cov: 12177 ft: 15263 corp: 28/1869b lim: 100 exec/s: 58 rss: 73Mb L: 98/98 MS: 1 CopyPart- 00:08:18.992 [2024-07-12 14:36:55.679085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.992 [2024-07-12 14:36:55.679113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.992 #59 NEW cov: 12177 ft: 15281 corp: 29/1889b lim: 100 exec/s: 59 rss: 73Mb L: 20/98 MS: 1 EraseBytes- 00:08:18.992 [2024-07-12 14:36:55.719334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.992 [2024-07-12 14:36:55.719360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.992 [2024-07-12 14:36:55.719395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:18.992 [2024-07-12 14:36:55.719411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.992 #60 NEW cov: 12177 ft: 15300 corp: 30/1947b lim: 100 exec/s: 60 rss: 73Mb L: 58/98 MS: 1 EraseBytes- 00:08:18.992 [2024-07-12 14:36:55.769367] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:18.992 [2024-07-12 14:36:55.769397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.249 #61 NEW cov: 12177 ft: 15338 corp: 31/1974b lim: 100 exec/s: 61 rss: 73Mb L: 27/98 MS: 1 ChangeBit- 00:08:19.249 [2024-07-12 14:36:55.809516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:19.249 [2024-07-12 14:36:55.809548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.249 #63 NEW cov: 12177 ft: 15339 corp: 32/1994b lim: 100 exec/s: 63 rss: 73Mb L: 20/98 MS: 2 CrossOver-CopyPart- 00:08:19.249 [2024-07-12 14:36:55.859992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:19.249 [2024-07-12 14:36:55.860019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.249 [2024-07-12 14:36:55.860077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:19.249 [2024-07-12 14:36:55.860090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.249 [2024-07-12 14:36:55.860145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:19.249 [2024-07-12 14:36:55.860160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.249 [2024-07-12 14:36:55.860228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:19.249 [2024-07-12 14:36:55.860244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.249 #64 NEW cov: 12177 ft: 15360 corp: 33/2091b lim: 100 exec/s: 64 rss: 74Mb L: 97/98 MS: 1 CopyPart- 00:08:19.249 [2024-07-12 14:36:55.910013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:19.249 [2024-07-12 14:36:55.910038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.249 [2024-07-12 14:36:55.910088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:19.249 [2024-07-12 14:36:55.910103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.249 [2024-07-12 14:36:55.910159] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:19.249 [2024-07-12 14:36:55.910172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.249 #65 NEW cov: 12177 ft: 15372 corp: 34/2161b lim: 100 exec/s: 65 rss: 74Mb L: 70/98 MS: 1 InsertRepeatedBytes- 00:08:19.249 [2024-07-12 14:36:55.960141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:19.249 [2024-07-12 14:36:55.960168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:08:19.249 [2024-07-12 14:36:55.960207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:19.249 [2024-07-12 14:36:55.960223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.249 [2024-07-12 14:36:55.960278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:19.249 [2024-07-12 14:36:55.960294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.249 #66 NEW cov: 12177 ft: 15386 corp: 35/2229b lim: 100 exec/s: 66 rss: 74Mb L: 68/98 MS: 1 InsertRepeatedBytes- 00:08:19.249 [2024-07-12 14:36:56.000390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:19.249 [2024-07-12 14:36:56.000416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.249 [2024-07-12 14:36:56.000456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:19.249 [2024-07-12 14:36:56.000469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.249 [2024-07-12 14:36:56.000524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:19.249 [2024-07-12 14:36:56.000545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.249 [2024-07-12 14:36:56.000617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:19.249 [2024-07-12 14:36:56.000631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.249 #67 NEW cov: 12177 ft: 15402 corp: 36/2326b lim: 100 exec/s: 67 rss: 74Mb L: 97/98 MS: 1 CopyPart- 00:08:19.506 [2024-07-12 14:36:56.050398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:19.507 [2024-07-12 14:36:56.050425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.507 [2024-07-12 14:36:56.050462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:19.507 [2024-07-12 14:36:56.050478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.507 [2024-07-12 14:36:56.050543] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:19.507 [2024-07-12 14:36:56.050574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.507 #68 NEW cov: 12177 ft: 15413 corp: 37/2399b lim: 100 exec/s: 68 rss: 74Mb L: 73/98 MS: 1 ChangeByte- 00:08:19.507 [2024-07-12 14:36:56.090638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:19.507 [2024-07-12 14:36:56.090664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.507 [2024-07-12 14:36:56.090713] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:19.507 [2024-07-12 14:36:56.090728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.507 [2024-07-12 14:36:56.090784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:19.507 [2024-07-12 14:36:56.090799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.507 [2024-07-12 14:36:56.090853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:19.507 [2024-07-12 14:36:56.090867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.507 #69 NEW cov: 12177 ft: 15435 corp: 38/2487b lim: 100 exec/s: 69 rss: 74Mb L: 88/98 MS: 1 PersAutoDict- DE: "\015\000"- 00:08:19.507 [2024-07-12 14:36:56.130703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:19.507 [2024-07-12 14:36:56.130730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.507 [2024-07-12 14:36:56.130778] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:19.507 [2024-07-12 14:36:56.130792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.507 [2024-07-12 14:36:56.130846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:19.507 [2024-07-12 14:36:56.130860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.507 [2024-07-12 14:36:56.130918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:19.507 [2024-07-12 14:36:56.130933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.507 #70 NEW cov: 12177 ft: 15439 corp: 39/2582b lim: 100 exec/s: 35 rss: 74Mb L: 95/98 MS: 1 InsertRepeatedBytes- 00:08:19.507 #70 DONE cov: 12177 ft: 15439 corp: 39/2582b lim: 100 exec/s: 35 rss: 74Mb 00:08:19.507 ###### Recommended dictionary. ###### 00:08:19.507 "\015\000" # Uses: 4 00:08:19.507 ###### End of recommended dictionary. 
###### 00:08:19.507 Done 70 runs in 2 second(s) 00:08:19.507 14:36:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:08:19.507 14:36:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:19.507 14:36:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:19.507 14:36:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:08:19.507 14:36:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:08:19.507 14:36:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:19.507 14:36:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:19.507 14:36:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:19.507 14:36:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:08:19.507 14:36:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:19.507 14:36:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:19.507 14:36:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:08:19.764 14:36:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:08:19.764 14:36:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:19.765 14:36:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:08:19.765 14:36:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:19.765 14:36:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:19.765 14:36:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:19.765 14:36:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:08:19.765 [2024-07-12 14:36:56.331782] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:08:19.765 [2024-07-12 14:36:56.331855] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1430405 ] 00:08:19.765 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.765 [2024-07-12 14:36:56.544151] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.021 [2024-07-12 14:36:56.618128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.021 [2024-07-12 14:36:56.677338] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.021 [2024-07-12 14:36:56.693535] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:08:20.021 INFO: Running with entropic power schedule (0xFF, 100). 00:08:20.021 INFO: Seed: 4107547403 00:08:20.021 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:08:20.021 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:08:20.021 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:20.021 INFO: A corpus is not provided, starting from an empty corpus 00:08:20.021 #2 INITED exec/s: 0 rss: 65Mb 00:08:20.021 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:20.021 This may also happen if the target rejected all inputs we tried so far 00:08:20.021 [2024-07-12 14:36:56.758933] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911453644273334 len:46775 00:08:20.022 [2024-07-12 14:36:56.758965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.022 [2024-07-12 14:36:56.759013] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13165911456529954486 len:46775 00:08:20.022 [2024-07-12 14:36:56.759029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.022 [2024-07-12 14:36:56.759080] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13165911456529954486 len:46775 00:08:20.022 [2024-07-12 14:36:56.759094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.586 NEW_FUNC[1/695]: 0x4a4570 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:08:20.586 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:20.586 #13 NEW cov: 11911 ft: 11912 corp: 2/31b lim: 50 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:08:20.586 [2024-07-12 14:36:57.100176] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11140386617063807642 len:39579 00:08:20.586 [2024-07-12 14:36:57.100240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.586 [2024-07-12 14:36:57.100323] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11140386617063807642 
len:39579 00:08:20.586 [2024-07-12 14:36:57.100352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.586 [2024-07-12 14:36:57.100431] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11140386617063807642 len:39579 00:08:20.586 [2024-07-12 14:36:57.100460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.586 [2024-07-12 14:36:57.100556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11140386617063807642 len:39579 00:08:20.586 [2024-07-12 14:36:57.100586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.586 #15 NEW cov: 12041 ft: 12787 corp: 3/78b lim: 50 exec/s: 0 rss: 72Mb L: 47/47 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:20.586 [2024-07-12 14:36:57.149928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911453644273334 len:46775 00:08:20.586 [2024-07-12 14:36:57.149957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.586 [2024-07-12 14:36:57.150000] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13165893864343910070 len:46775 00:08:20.586 [2024-07-12 14:36:57.150016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.586 [2024-07-12 14:36:57.150068] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13165911456529954486 len:46775 00:08:20.586 [2024-07-12 14:36:57.150083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.586 #16 NEW cov: 12047 ft: 13003 corp: 4/108b lim: 50 exec/s: 0 rss: 72Mb L: 30/47 MS: 1 ChangeBit- 00:08:20.586 [2024-07-12 14:36:57.200211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11140386617063807642 len:39579 00:08:20.586 [2024-07-12 14:36:57.200241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.586 [2024-07-12 14:36:57.200284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11140386617063807642 len:39579 00:08:20.586 [2024-07-12 14:36:57.200301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.586 [2024-07-12 14:36:57.200356] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11140386548344330906 len:39579 00:08:20.586 [2024-07-12 14:36:57.200369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.586 [2024-07-12 14:36:57.200426] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11140386617063807642 len:39579 00:08:20.586 [2024-07-12 14:36:57.200443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 
cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.586 #17 NEW cov: 12132 ft: 13301 corp: 5/155b lim: 50 exec/s: 0 rss: 72Mb L: 47/47 MS: 1 ChangeBit- 00:08:20.586 [2024-07-12 14:36:57.250364] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911453644273334 len:46775 00:08:20.586 [2024-07-12 14:36:57.250391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.586 [2024-07-12 14:36:57.250438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15046755949882029750 len:53457 00:08:20.586 [2024-07-12 14:36:57.250454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.586 [2024-07-12 14:36:57.250507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:15046755950319947984 len:53431 00:08:20.586 [2024-07-12 14:36:57.250523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.586 [2024-07-12 14:36:57.250582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:13165911456529954486 len:46775 00:08:20.586 [2024-07-12 14:36:57.250598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.586 #18 NEW cov: 12132 ft: 13405 corp: 6/200b lim: 50 exec/s: 0 rss: 72Mb L: 45/47 MS: 1 InsertRepeatedBytes- 00:08:20.586 [2024-07-12 14:36:57.290357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911453644273334 len:46775 00:08:20.586 [2024-07-12 14:36:57.290386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.586 [2024-07-12 14:36:57.290422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13165911456529954486 len:46775 00:08:20.586 [2024-07-12 14:36:57.290438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.586 [2024-07-12 14:36:57.290493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13165911087162767030 len:46775 00:08:20.586 [2024-07-12 14:36:57.290508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.586 #19 NEW cov: 12132 ft: 13522 corp: 7/230b lim: 50 exec/s: 0 rss: 72Mb L: 30/47 MS: 1 ChangeByte- 00:08:20.586 [2024-07-12 14:36:57.330457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911453644797622 len:46775 00:08:20.586 [2024-07-12 14:36:57.330485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.586 [2024-07-12 14:36:57.330521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13165911456529954486 len:46775 00:08:20.586 [2024-07-12 14:36:57.330544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.586 [2024-07-12 
14:36:57.330599] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13165911087162767030 len:46775 00:08:20.586 [2024-07-12 14:36:57.330615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.586 #20 NEW cov: 12132 ft: 13554 corp: 8/260b lim: 50 exec/s: 0 rss: 72Mb L: 30/47 MS: 1 ChangeBit- 00:08:20.843 [2024-07-12 14:36:57.380602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911453644797622 len:46775 00:08:20.843 [2024-07-12 14:36:57.380631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.843 [2024-07-12 14:36:57.380668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13165911456529954486 len:46775 00:08:20.843 [2024-07-12 14:36:57.380684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.843 [2024-07-12 14:36:57.380739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13165911065687930550 len:46775 00:08:20.843 [2024-07-12 14:36:57.380754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.843 #21 NEW cov: 12132 ft: 13700 corp: 9/290b lim: 50 exec/s: 0 rss: 72Mb L: 30/47 MS: 1 ChangeBinInt- 00:08:20.843 [2024-07-12 14:36:57.430634] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911453644797622 len:46775 00:08:20.843 [2024-07-12 14:36:57.430664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.843 [2024-07-12 14:36:57.430716] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13165911456529954486 len:46689 00:08:20.843 [2024-07-12 14:36:57.430732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.843 #22 NEW cov: 12132 ft: 14053 corp: 10/312b lim: 50 exec/s: 0 rss: 72Mb L: 22/47 MS: 1 EraseBytes- 00:08:20.843 [2024-07-12 14:36:57.470814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911453644797622 len:31415 00:08:20.843 [2024-07-12 14:36:57.470843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.844 [2024-07-12 14:36:57.470882] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13165911456529954486 len:46775 00:08:20.844 [2024-07-12 14:36:57.470898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.844 [2024-07-12 14:36:57.470952] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13165911065687930550 len:46775 00:08:20.844 [2024-07-12 14:36:57.470968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.844 #23 NEW cov: 12132 ft: 14143 corp: 11/342b lim: 50 exec/s: 0 rss: 73Mb L: 
30/47 MS: 1 ChangeByte- 00:08:20.844 [2024-07-12 14:36:57.521074] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911453644797622 len:46775 00:08:20.844 [2024-07-12 14:36:57.521105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.844 [2024-07-12 14:36:57.521142] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13165911456529954486 len:46775 00:08:20.844 [2024-07-12 14:36:57.521157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.844 [2024-07-12 14:36:57.521210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13165911456529954486 len:46775 00:08:20.844 [2024-07-12 14:36:57.521225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.844 [2024-07-12 14:36:57.521280] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:13165911456529954486 len:46775 00:08:20.844 [2024-07-12 14:36:57.521296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.844 #24 NEW cov: 12132 ft: 14167 corp: 12/388b lim: 50 exec/s: 0 rss: 73Mb L: 46/47 MS: 1 CopyPart- 00:08:20.844 [2024-07-12 14:36:57.561073] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911453644273334 len:46775 00:08:20.844 [2024-07-12 14:36:57.561102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.844 [2024-07-12 14:36:57.561141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13165911456529954486 len:46775 00:08:20.844 [2024-07-12 14:36:57.561158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.844 [2024-07-12 14:36:57.561214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13165911456529954486 len:24759 00:08:20.844 [2024-07-12 14:36:57.561227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.844 #25 NEW cov: 12132 ft: 14206 corp: 13/419b lim: 50 exec/s: 0 rss: 73Mb L: 31/47 MS: 1 CrossOver- 00:08:20.844 [2024-07-12 14:36:57.601323] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11140386617063807642 len:39579 00:08:20.844 [2024-07-12 14:36:57.601352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.844 [2024-07-12 14:36:57.601392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3430224055005518490 len:39579 00:08:20.844 [2024-07-12 14:36:57.601408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.844 [2024-07-12 14:36:57.601461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 
lba:11140386617063807642 len:39579 00:08:20.844 [2024-07-12 14:36:57.601476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.844 [2024-07-12 14:36:57.601534] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11140386617063807642 len:39579 00:08:20.844 [2024-07-12 14:36:57.601550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.844 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:20.844 #26 NEW cov: 12155 ft: 14256 corp: 14/467b lim: 50 exec/s: 0 rss: 73Mb L: 48/48 MS: 1 InsertByte- 00:08:21.102 [2024-07-12 14:36:57.641306] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13166192928620983990 len:46775 00:08:21.102 [2024-07-12 14:36:57.641334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.102 [2024-07-12 14:36:57.641373] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13165911456529954486 len:46775 00:08:21.102 [2024-07-12 14:36:57.641389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.102 [2024-07-12 14:36:57.641442] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13165911456529954486 len:46775 00:08:21.102 [2024-07-12 14:36:57.641458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.102 #27 NEW cov: 12155 ft: 14326 corp: 15/497b lim: 50 exec/s: 0 rss: 73Mb L: 30/48 MS: 1 ChangeBit- 00:08:21.102 [2024-07-12 14:36:57.681423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911453644273334 len:46775 00:08:21.102 [2024-07-12 14:36:57.681451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.102 [2024-07-12 14:36:57.681497] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13165911456529954486 len:46775 00:08:21.102 [2024-07-12 14:36:57.681514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.102 [2024-07-12 14:36:57.681573] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13165911456529917878 len:24759 00:08:21.102 [2024-07-12 14:36:57.681589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.102 #28 NEW cov: 12155 ft: 14337 corp: 16/528b lim: 50 exec/s: 0 rss: 73Mb L: 31/48 MS: 1 InsertByte- 00:08:21.102 [2024-07-12 14:36:57.721408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911454186651318 len:46775 00:08:21.102 [2024-07-12 14:36:57.721436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.102 [2024-07-12 14:36:57.721489] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13165911456529954486 len:46775 00:08:21.102 [2024-07-12 14:36:57.721505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.102 #29 NEW cov: 12155 ft: 14367 corp: 17/551b lim: 50 exec/s: 29 rss: 73Mb L: 23/48 MS: 1 InsertByte- 00:08:21.102 [2024-07-12 14:36:57.771846] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11140386617063807642 len:39579 00:08:21.102 [2024-07-12 14:36:57.771873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.102 [2024-07-12 14:36:57.771921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11140386617063807642 len:39579 00:08:21.102 [2024-07-12 14:36:57.771937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.102 [2024-07-12 14:36:57.771991] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11140386617063807642 len:39579 00:08:21.102 [2024-07-12 14:36:57.772006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.102 [2024-07-12 14:36:57.772060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11140386617063807642 len:39579 00:08:21.102 [2024-07-12 14:36:57.772076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:21.102 #32 NEW cov: 12155 ft: 14459 corp: 18/594b lim: 50 exec/s: 32 rss: 73Mb L: 43/48 MS: 3 CrossOver-CrossOver-CrossOver- 00:08:21.102 [2024-07-12 14:36:57.811862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911453644273334 len:46593 00:08:21.102 [2024-07-12 14:36:57.811891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.102 [2024-07-12 14:36:57.811935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13165911453464534710 len:46775 00:08:21.102 [2024-07-12 14:36:57.811952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.102 [2024-07-12 14:36:57.812009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13165911087162767030 len:46775 00:08:21.102 [2024-07-12 14:36:57.812025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.102 #33 NEW cov: 12155 ft: 14471 corp: 19/624b lim: 50 exec/s: 33 rss: 73Mb L: 30/48 MS: 1 ChangeBinInt- 00:08:21.102 [2024-07-12 14:36:57.852060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11140386617063807642 len:39579 00:08:21.102 [2024-07-12 14:36:57.852088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.102 [2024-07-12 14:36:57.852135] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11140386617063807642 len:39579 00:08:21.102 [2024-07-12 14:36:57.852152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.102 [2024-07-12 14:36:57.852209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11140386617063807642 len:39579 00:08:21.102 [2024-07-12 14:36:57.852226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.102 [2024-07-12 14:36:57.852278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11140386617063807642 len:39579 00:08:21.102 [2024-07-12 14:36:57.852294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:21.102 #34 NEW cov: 12155 ft: 14514 corp: 20/671b lim: 50 exec/s: 34 rss: 73Mb L: 47/48 MS: 1 ChangeBinInt- 00:08:21.359 [2024-07-12 14:36:57.892238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11140386737322891930 len:39579 00:08:21.359 [2024-07-12 14:36:57.892268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.359 [2024-07-12 14:36:57.892327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11140386617063807642 len:39579 00:08:21.359 [2024-07-12 14:36:57.892344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.359 [2024-07-12 14:36:57.892395] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11140386617063807642 len:39579 00:08:21.359 [2024-07-12 14:36:57.892412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.359 [2024-07-12 14:36:57.892469] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11140386617063807642 len:39579 00:08:21.359 [2024-07-12 14:36:57.892485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:21.359 #35 NEW cov: 12155 ft: 14533 corp: 21/719b lim: 50 exec/s: 35 rss: 73Mb L: 48/48 MS: 1 CrossOver- 00:08:21.359 [2024-07-12 14:36:57.932136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911453644797622 len:31415 00:08:21.359 [2024-07-12 14:36:57.932167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.359 [2024-07-12 14:36:57.932204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13165911456529954486 len:46775 00:08:21.359 [2024-07-12 14:36:57.932220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.359 [2024-07-12 14:36:57.932275] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13165911185947014838 len:23479 00:08:21.359 [2024-07-12 14:36:57.932291] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.359 #36 NEW cov: 12155 ft: 14538 corp: 22/750b lim: 50 exec/s: 36 rss: 73Mb L: 31/48 MS: 1 InsertByte- 00:08:21.359 [2024-07-12 14:36:57.982090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:53238 00:08:21.359 [2024-07-12 14:36:57.982118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.359 #39 NEW cov: 12155 ft: 14861 corp: 23/760b lim: 50 exec/s: 39 rss: 73Mb L: 10/48 MS: 3 InsertByte-ChangeBinInt-CMP- DE: "\377\377\377\377\377\377\377\377"- 00:08:21.359 [2024-07-12 14:36:58.022517] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11140386617063807642 len:39579 00:08:21.359 [2024-07-12 14:36:58.022550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.359 [2024-07-12 14:36:58.022599] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3430224055005518490 len:39579 00:08:21.359 [2024-07-12 14:36:58.022615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.359 [2024-07-12 14:36:58.022670] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11140386617063807642 len:39579 00:08:21.359 [2024-07-12 14:36:58.022685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.359 [2024-07-12 14:36:58.022740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11140386617063807642 len:39579 00:08:21.359 [2024-07-12 14:36:58.022755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:21.359 #40 NEW cov: 12155 ft: 14878 corp: 24/808b lim: 50 exec/s: 40 rss: 73Mb L: 48/48 MS: 1 CopyPart- 00:08:21.359 [2024-07-12 14:36:58.072558] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911453644273334 len:46775 00:08:21.359 [2024-07-12 14:36:58.072587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.359 [2024-07-12 14:36:58.072623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13165911456529954486 len:46775 00:08:21.359 [2024-07-12 14:36:58.072639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.359 [2024-07-12 14:36:58.072696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13165911087162767030 len:46775 00:08:21.359 [2024-07-12 14:36:58.072711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.359 #41 NEW cov: 12155 ft: 14988 corp: 25/838b lim: 50 exec/s: 41 rss: 73Mb L: 30/48 MS: 1 CopyPart- 00:08:21.359 [2024-07-12 14:36:58.112656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 
cid:0 nsid:0 lba:13165911453644797691 len:46775 00:08:21.359 [2024-07-12 14:36:58.112684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.359 [2024-07-12 14:36:58.112723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13165911456529954486 len:46775 00:08:21.359 [2024-07-12 14:36:58.112739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.359 [2024-07-12 14:36:58.112792] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13165911087162767030 len:46775 00:08:21.359 [2024-07-12 14:36:58.112808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.359 #42 NEW cov: 12155 ft: 15005 corp: 26/868b lim: 50 exec/s: 42 rss: 73Mb L: 30/48 MS: 1 ChangeByte- 00:08:21.617 [2024-07-12 14:36:58.152896] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11140386617063807642 len:39579 00:08:21.617 [2024-07-12 14:36:58.152925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.617 [2024-07-12 14:36:58.152973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11140386617063807642 len:39579 00:08:21.617 [2024-07-12 14:36:58.152989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.617 [2024-07-12 14:36:58.153046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11140386617063807642 len:39579 00:08:21.617 [2024-07-12 14:36:58.153063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.617 [2024-07-12 14:36:58.153135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11140386737322891930 len:46775 00:08:21.617 [2024-07-12 14:36:58.153151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:21.617 #43 NEW cov: 12155 ft: 15030 corp: 27/915b lim: 50 exec/s: 43 rss: 73Mb L: 47/48 MS: 1 CrossOver- 00:08:21.617 [2024-07-12 14:36:58.202859] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11140386617063807642 len:39579 00:08:21.617 [2024-07-12 14:36:58.202887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.617 [2024-07-12 14:36:58.202926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11140386617063807642 len:39579 00:08:21.617 [2024-07-12 14:36:58.202942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.617 [2024-07-12 14:36:58.202995] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11140386617063807642 len:39579 00:08:21.617 [2024-07-12 14:36:58.203010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.617 #44 NEW cov: 12155 ft: 15045 corp: 28/950b lim: 50 exec/s: 44 rss: 73Mb L: 35/48 MS: 1 EraseBytes- 00:08:21.617 [2024-07-12 14:36:58.242823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911453644797622 len:46775 00:08:21.617 [2024-07-12 14:36:58.242851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.617 #45 NEW cov: 12155 ft: 15105 corp: 29/963b lim: 50 exec/s: 45 rss: 73Mb L: 13/48 MS: 1 EraseBytes- 00:08:21.617 [2024-07-12 14:36:58.282923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911453644797622 len:46775 00:08:21.617 [2024-07-12 14:36:58.282950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.617 #46 NEW cov: 12155 ft: 15129 corp: 30/976b lim: 50 exec/s: 46 rss: 73Mb L: 13/48 MS: 1 ChangeBinInt- 00:08:21.617 [2024-07-12 14:36:58.333186] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911453644273334 len:46775 00:08:21.617 [2024-07-12 14:36:58.333214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.617 [2024-07-12 14:36:58.333256] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13165911456529954486 len:24759 00:08:21.617 [2024-07-12 14:36:58.333271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.617 #47 NEW cov: 12155 ft: 15216 corp: 31/997b lim: 50 exec/s: 47 rss: 73Mb L: 21/48 MS: 1 EraseBytes- 00:08:21.617 [2024-07-12 14:36:58.383541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911453644797622 len:31415 00:08:21.617 [2024-07-12 14:36:58.383568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.617 [2024-07-12 14:36:58.383619] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13165911456529954486 len:46775 00:08:21.617 [2024-07-12 14:36:58.383635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.617 [2024-07-12 14:36:58.383690] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:3065380864 len:1 00:08:21.617 [2024-07-12 14:36:58.383706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.617 [2024-07-12 14:36:58.383758] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:46775 00:08:21.617 [2024-07-12 14:36:58.383772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:21.875 #48 NEW cov: 12155 ft: 15233 corp: 32/1044b lim: 50 exec/s: 48 rss: 73Mb L: 47/48 MS: 1 InsertRepeatedBytes- 00:08:21.875 [2024-07-12 14:36:58.433611] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 
lba:11140386617063807642 len:39579 00:08:21.875 [2024-07-12 14:36:58.433639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.875 [2024-07-12 14:36:58.433678] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11140386617063807642 len:39579 00:08:21.875 [2024-07-12 14:36:58.433695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.875 [2024-07-12 14:36:58.433749] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11140386617063807642 len:39579 00:08:21.875 [2024-07-12 14:36:58.433765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.875 #49 NEW cov: 12155 ft: 15319 corp: 33/1079b lim: 50 exec/s: 49 rss: 74Mb L: 35/48 MS: 1 ChangeByte- 00:08:21.875 [2024-07-12 14:36:58.483713] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911453644797691 len:36791 00:08:21.875 [2024-07-12 14:36:58.483741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.875 [2024-07-12 14:36:58.483786] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13165911456529954486 len:46775 00:08:21.875 [2024-07-12 14:36:58.483802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.875 [2024-07-12 14:36:58.483856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13165911456529954486 len:24759 00:08:21.875 [2024-07-12 14:36:58.483875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.875 #50 NEW cov: 12155 ft: 15327 corp: 34/1110b lim: 50 exec/s: 50 rss: 74Mb L: 31/48 MS: 1 InsertByte- 00:08:21.875 [2024-07-12 14:36:58.533750] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911454186651318 len:46775 00:08:21.875 [2024-07-12 14:36:58.533778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.875 [2024-07-12 14:36:58.533815] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13165911456529954486 len:46775 00:08:21.875 [2024-07-12 14:36:58.533832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.875 #51 NEW cov: 12155 ft: 15336 corp: 35/1133b lim: 50 exec/s: 51 rss: 74Mb L: 23/48 MS: 1 ShuffleBytes- 00:08:21.875 [2024-07-12 14:36:58.584122] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13165911453644797622 len:31415 00:08:21.875 [2024-07-12 14:36:58.584150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.875 [2024-07-12 14:36:58.584195] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13165911456529954486 len:46775 00:08:21.875 
[2024-07-12 14:36:58.584211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.875 [2024-07-12 14:36:58.584264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:3065380864 len:1 00:08:21.875 [2024-07-12 14:36:58.584280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.875 [2024-07-12 14:36:58.584335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:46775 00:08:21.875 [2024-07-12 14:36:58.584350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:21.875 #52 NEW cov: 12155 ft: 15355 corp: 36/1180b lim: 50 exec/s: 52 rss: 74Mb L: 47/48 MS: 1 CrossOver- 00:08:21.875 [2024-07-12 14:36:58.634275] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:21.875 [2024-07-12 14:36:58.634304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.875 [2024-07-12 14:36:58.634353] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:21.875 [2024-07-12 14:36:58.634369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.875 [2024-07-12 14:36:58.634422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:21.875 [2024-07-12 14:36:58.634438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.875 [2024-07-12 14:36:58.634491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:21.875 [2024-07-12 14:36:58.634508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:21.875 #54 NEW cov: 12155 ft: 15379 corp: 37/1225b lim: 50 exec/s: 54 rss: 74Mb L: 45/48 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:08:22.134 [2024-07-12 14:36:58.674062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13186539704830394294 len:65536 00:08:22.134 [2024-07-12 14:36:58.674092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.134 #58 NEW cov: 12155 ft: 15382 corp: 38/1236b lim: 50 exec/s: 58 rss: 74Mb L: 11/48 MS: 4 PersAutoDict-PersAutoDict-ChangeBinInt-CrossOver- DE: "\377\377\377\377\377\377\377\377"-"\377\377\377\377\377\377\377\377"- 00:08:22.134 [2024-07-12 14:36:58.714470] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11140386737322891930 len:39579 00:08:22.134 [2024-07-12 14:36:58.714498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.134 [2024-07-12 14:36:58.714570] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 
lba:11140386617063807642 len:39579 00:08:22.134 [2024-07-12 14:36:58.714588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.134 [2024-07-12 14:36:58.714641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11140386617063807642 len:39579 00:08:22.134 [2024-07-12 14:36:58.714656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.134 [2024-07-12 14:36:58.714711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11140386617063807642 len:39579 00:08:22.134 [2024-07-12 14:36:58.714726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:22.134 #59 NEW cov: 12155 ft: 15393 corp: 39/1284b lim: 50 exec/s: 29 rss: 74Mb L: 48/48 MS: 1 ShuffleBytes- 00:08:22.134 #59 DONE cov: 12155 ft: 15393 corp: 39/1284b lim: 50 exec/s: 29 rss: 74Mb 00:08:22.134 ###### Recommended dictionary. ###### 00:08:22.134 "\377\377\377\377\377\377\377\377" # Uses: 2 00:08:22.134 ###### End of recommended dictionary. ###### 00:08:22.134 Done 59 runs in 2 second(s) 00:08:22.134 14:36:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:08:22.134 14:36:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:22.134 14:36:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:22.134 14:36:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:08:22.134 14:36:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:08:22.134 14:36:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:22.134 14:36:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:22.134 14:36:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:22.134 14:36:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:08:22.134 14:36:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:22.134 14:36:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:22.134 14:36:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:08:22.134 14:36:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:08:22.134 14:36:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:22.134 14:36:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:08:22.134 14:36:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:22.134 14:36:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:22.134 14:36:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:22.134 14:36:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:08:22.392 [2024-07-12 14:36:58.933892] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:08:22.392 [2024-07-12 14:36:58.933965] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1430722 ] 00:08:22.392 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.392 [2024-07-12 14:36:59.153896] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.650 [2024-07-12 14:36:59.231363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.650 [2024-07-12 14:36:59.291041] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.650 [2024-07-12 14:36:59.307243] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:22.650 INFO: Running with entropic power schedule (0xFF, 100). 00:08:22.650 INFO: Seed: 2426579862 00:08:22.650 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:08:22.651 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:08:22.651 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:22.651 INFO: A corpus is not provided, starting from an empty corpus 00:08:22.651 #2 INITED exec/s: 0 rss: 65Mb 00:08:22.651 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
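The block above is the per-pass launch that nvmf/run.sh assembles for fuzzer 20: an empty corpus directory, a transport ID for the in-process NVMe/TCP listener on 127.0.0.1 port 4420, a per-pass JSON config, and the llvm_nvme_fuzz binary invoked with a -t 1 time budget and the selector -Z 20. A minimal local sketch of the same invocation follows, assuming a locally built SPDK tree in $SPDK_DIR and scratch paths under /tmp; those paths are assumptions, the CI workspace paths above are specific to this Jenkins node, and the flags are copied from the trace rather than from any reference documentation.

  # Sketch only: SPDK_DIR and the /tmp paths are assumptions; flags mirror the invocation above.
  SPDK_DIR=$HOME/spdk
  TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420'
  mkdir -p /tmp/llvm_nvmf_20 /tmp/llvm_out            # corpus dir (-D) and artifact dir (-P)
  cp "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" /tmp/fuzz_json_20.conf   # run.sh renders this with sed (trsvcid substitution)
  "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
    -P /tmp/llvm_out/ -F "$TRID" -c /tmp/fuzz_json_20.conf \
    -t 1 -D /tmp/llvm_nvmf_20 -Z 20

The exact semantics of -t and -Z are not spelled out in this log (run.sh passes the pass number and a one-unit time budget), so treat the comments as a best-effort reading of the trace, not authoritative usage.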
00:08:22.651 This may also happen if the target rejected all inputs we tried so far 00:08:22.651 [2024-07-12 14:36:59.372578] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:22.651 [2024-07-12 14:36:59.372611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.937 NEW_FUNC[1/697]: 0x4a6130 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:08:22.937 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:22.937 #6 NEW cov: 11969 ft: 11970 corp: 2/35b lim: 90 exec/s: 0 rss: 72Mb L: 34/34 MS: 4 ChangeBit-CrossOver-CopyPart-InsertRepeatedBytes- 00:08:22.937 [2024-07-12 14:36:59.713550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:22.937 [2024-07-12 14:36:59.713617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.194 #7 NEW cov: 12099 ft: 12685 corp: 3/69b lim: 90 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ChangeByte- 00:08:23.194 [2024-07-12 14:36:59.773451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.194 [2024-07-12 14:36:59.773480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.194 #8 NEW cov: 12105 ft: 12962 corp: 4/97b lim: 90 exec/s: 0 rss: 72Mb L: 28/34 MS: 1 EraseBytes- 00:08:23.194 [2024-07-12 14:36:59.823623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.194 [2024-07-12 14:36:59.823650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.194 #9 NEW cov: 12190 ft: 13294 corp: 5/119b lim: 90 exec/s: 0 rss: 72Mb L: 22/34 MS: 1 EraseBytes- 00:08:23.194 [2024-07-12 14:36:59.873727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.195 [2024-07-12 14:36:59.873754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.195 #10 NEW cov: 12190 ft: 13474 corp: 6/153b lim: 90 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 CrossOver- 00:08:23.195 [2024-07-12 14:36:59.914018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.195 [2024-07-12 14:36:59.914048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.195 #11 NEW cov: 12190 ft: 13525 corp: 7/181b lim: 90 exec/s: 0 rss: 72Mb L: 28/34 MS: 1 CopyPart- 00:08:23.195 [2024-07-12 14:36:59.954142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.195 [2024-07-12 14:36:59.954169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.195 [2024-07-12 14:36:59.954222] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:23.195 [2024-07-12 14:36:59.954238] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.195 #12 NEW cov: 12190 ft: 14356 corp: 8/221b lim: 90 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 CopyPart- 00:08:23.452 [2024-07-12 14:36:59.994257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.452 [2024-07-12 14:36:59.994288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.452 [2024-07-12 14:36:59.994346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:23.452 [2024-07-12 14:36:59.994364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.452 #13 NEW cov: 12190 ft: 14397 corp: 9/261b lim: 90 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 ShuffleBytes- 00:08:23.452 [2024-07-12 14:37:00.044470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.452 [2024-07-12 14:37:00.044521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.452 #14 NEW cov: 12190 ft: 14490 corp: 10/295b lim: 90 exec/s: 0 rss: 73Mb L: 34/40 MS: 1 CrossOver- 00:08:23.452 [2024-07-12 14:37:00.084523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.452 [2024-07-12 14:37:00.084558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.452 [2024-07-12 14:37:00.084617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:23.452 [2024-07-12 14:37:00.084634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.452 #15 NEW cov: 12190 ft: 14514 corp: 11/341b lim: 90 exec/s: 0 rss: 73Mb L: 46/46 MS: 1 CopyPart- 00:08:23.452 [2024-07-12 14:37:00.124492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.452 [2024-07-12 14:37:00.124519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.452 #16 NEW cov: 12190 ft: 14547 corp: 12/375b lim: 90 exec/s: 0 rss: 73Mb L: 34/46 MS: 1 ChangeBit- 00:08:23.452 [2024-07-12 14:37:00.164600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.452 [2024-07-12 14:37:00.164628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.452 #17 NEW cov: 12190 ft: 14595 corp: 13/397b lim: 90 exec/s: 0 rss: 73Mb L: 22/46 MS: 1 CopyPart- 00:08:23.452 [2024-07-12 14:37:00.214740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.452 [2024-07-12 14:37:00.214768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.710 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:23.710 #18 NEW cov: 12213 ft: 14689 corp: 14/419b lim: 90 
exec/s: 0 rss: 73Mb L: 22/46 MS: 1 ShuffleBytes- 00:08:23.710 [2024-07-12 14:37:00.264868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.710 [2024-07-12 14:37:00.264898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.710 #19 NEW cov: 12213 ft: 14696 corp: 15/450b lim: 90 exec/s: 0 rss: 73Mb L: 31/46 MS: 1 CopyPart- 00:08:23.710 [2024-07-12 14:37:00.304956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.710 [2024-07-12 14:37:00.304984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.710 #20 NEW cov: 12213 ft: 14721 corp: 16/472b lim: 90 exec/s: 0 rss: 73Mb L: 22/46 MS: 1 ChangeBinInt- 00:08:23.710 [2024-07-12 14:37:00.345184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.710 [2024-07-12 14:37:00.345211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.710 [2024-07-12 14:37:00.345256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:23.710 [2024-07-12 14:37:00.345272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.710 #21 NEW cov: 12213 ft: 14728 corp: 17/512b lim: 90 exec/s: 21 rss: 73Mb L: 40/46 MS: 1 ChangeByte- 00:08:23.710 [2024-07-12 14:37:00.385388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.710 [2024-07-12 14:37:00.385415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.710 [2024-07-12 14:37:00.385472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:23.710 [2024-07-12 14:37:00.385486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.710 #22 NEW cov: 12213 ft: 14745 corp: 18/559b lim: 90 exec/s: 22 rss: 73Mb L: 47/47 MS: 1 InsertByte- 00:08:23.710 [2024-07-12 14:37:00.435363] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.710 [2024-07-12 14:37:00.435391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.710 #24 NEW cov: 12213 ft: 14763 corp: 19/582b lim: 90 exec/s: 24 rss: 73Mb L: 23/47 MS: 2 ChangeByte-CrossOver- 00:08:23.710 [2024-07-12 14:37:00.475594] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.710 [2024-07-12 14:37:00.475621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.710 [2024-07-12 14:37:00.475666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:23.710 [2024-07-12 14:37:00.475682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.967 #30 NEW cov: 12213 ft: 14804 corp: 
20/622b lim: 90 exec/s: 30 rss: 73Mb L: 40/47 MS: 1 CrossOver- 00:08:23.967 [2024-07-12 14:37:00.525589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.967 [2024-07-12 14:37:00.525617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.967 #31 NEW cov: 12213 ft: 14829 corp: 21/656b lim: 90 exec/s: 31 rss: 73Mb L: 34/47 MS: 1 ChangeByte- 00:08:23.967 [2024-07-12 14:37:00.575745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.967 [2024-07-12 14:37:00.575772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.967 #32 NEW cov: 12213 ft: 14839 corp: 22/684b lim: 90 exec/s: 32 rss: 73Mb L: 28/47 MS: 1 ChangeByte- 00:08:23.967 [2024-07-12 14:37:00.615961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.967 [2024-07-12 14:37:00.615991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.967 [2024-07-12 14:37:00.616049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:23.967 [2024-07-12 14:37:00.616065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.967 #33 NEW cov: 12213 ft: 14861 corp: 23/729b lim: 90 exec/s: 33 rss: 73Mb L: 45/47 MS: 1 InsertRepeatedBytes- 00:08:23.967 [2024-07-12 14:37:00.656138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.967 [2024-07-12 14:37:00.656164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.967 [2024-07-12 14:37:00.656233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:23.967 [2024-07-12 14:37:00.656248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.967 #35 NEW cov: 12213 ft: 14881 corp: 24/781b lim: 90 exec/s: 35 rss: 73Mb L: 52/52 MS: 2 EraseBytes-InsertRepeatedBytes- 00:08:23.967 [2024-07-12 14:37:00.696242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.967 [2024-07-12 14:37:00.696269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.967 [2024-07-12 14:37:00.696320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:23.967 [2024-07-12 14:37:00.696336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.968 #36 NEW cov: 12213 ft: 14920 corp: 25/830b lim: 90 exec/s: 36 rss: 73Mb L: 49/52 MS: 1 InsertRepeatedBytes- 00:08:23.968 [2024-07-12 14:37:00.746216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:23.968 [2024-07-12 14:37:00.746243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
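For readers scanning these interleaved notices, the lines beginning with '#<n> NEW ...' are standard libFuzzer status output rather than anything specific to this harness; they read roughly as

  #<execs> NEW cov: <blocks> ft: <features> corp: <entries>/<bytes> lim: <len-limit> exec/s: <rate> rss: <mem> L: <input-len>/<largest-in-corpus> MS: <count> <mutations>

where 'NEW' marks an input that reached new coverage and was kept in the corpus, 'cov' and 'ft' count covered code blocks and coverage features, 'corp' is the corpus size, 'lim' is the current cap on generated input length, 'MS' lists the mutation sequence that produced the input, 'NEW_FUNC' lines name newly reached functions, and 'DE:' entries are dictionary values reused by mutations such as PersAutoDict. This is the general libFuzzer format, summarized here as a reading aid; the surrounding *NOTICE* pairs are the SPDK target printing each fuzzed NVMe command (nvme_io_qpair_print_command) and its completion status (spdk_nvme_print_completion).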
00:08:24.225 #37 NEW cov: 12213 ft: 14986 corp: 26/863b lim: 90 exec/s: 37 rss: 74Mb L: 33/52 MS: 1 EraseBytes- 00:08:24.225 [2024-07-12 14:37:00.796486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:24.225 [2024-07-12 14:37:00.796512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.225 [2024-07-12 14:37:00.796571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:24.225 [2024-07-12 14:37:00.796586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.225 #38 NEW cov: 12213 ft: 15006 corp: 27/909b lim: 90 exec/s: 38 rss: 74Mb L: 46/52 MS: 1 ShuffleBytes- 00:08:24.225 [2024-07-12 14:37:00.836653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:24.225 [2024-07-12 14:37:00.836680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.225 [2024-07-12 14:37:00.836733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:24.225 [2024-07-12 14:37:00.836749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.225 #39 NEW cov: 12213 ft: 15018 corp: 28/949b lim: 90 exec/s: 39 rss: 74Mb L: 40/52 MS: 1 ChangeBit- 00:08:24.225 [2024-07-12 14:37:00.876726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:24.225 [2024-07-12 14:37:00.876752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.225 [2024-07-12 14:37:00.876828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:24.225 [2024-07-12 14:37:00.876845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.225 #40 NEW cov: 12213 ft: 15025 corp: 29/985b lim: 90 exec/s: 40 rss: 74Mb L: 36/52 MS: 1 EraseBytes- 00:08:24.225 [2024-07-12 14:37:00.916857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:24.225 [2024-07-12 14:37:00.916882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.225 [2024-07-12 14:37:00.916938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:24.225 [2024-07-12 14:37:00.916954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.225 #41 NEW cov: 12213 ft: 15029 corp: 30/1024b lim: 90 exec/s: 41 rss: 74Mb L: 39/52 MS: 1 EraseBytes- 00:08:24.225 [2024-07-12 14:37:00.956776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:24.225 [2024-07-12 14:37:00.956803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.225 #42 NEW cov: 12213 ft: 15068 corp: 31/1059b lim: 90 exec/s: 42 rss: 74Mb L: 35/52 MS: 1 
InsertByte- 00:08:24.225 [2024-07-12 14:37:00.996945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:24.225 [2024-07-12 14:37:00.996972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.483 #43 NEW cov: 12213 ft: 15080 corp: 32/1081b lim: 90 exec/s: 43 rss: 74Mb L: 22/52 MS: 1 ChangeBinInt- 00:08:24.483 [2024-07-12 14:37:01.037049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:24.483 [2024-07-12 14:37:01.037075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.483 #44 NEW cov: 12213 ft: 15095 corp: 33/1115b lim: 90 exec/s: 44 rss: 74Mb L: 34/52 MS: 1 ChangeBinInt- 00:08:24.483 [2024-07-12 14:37:01.077126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:24.483 [2024-07-12 14:37:01.077152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.483 #45 NEW cov: 12213 ft: 15104 corp: 34/1137b lim: 90 exec/s: 45 rss: 74Mb L: 22/52 MS: 1 ChangeBinInt- 00:08:24.483 [2024-07-12 14:37:01.127408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:24.483 [2024-07-12 14:37:01.127436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.484 [2024-07-12 14:37:01.127476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:24.484 [2024-07-12 14:37:01.127492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.484 #46 NEW cov: 12213 ft: 15158 corp: 35/1183b lim: 90 exec/s: 46 rss: 74Mb L: 46/52 MS: 1 ChangeByte- 00:08:24.484 [2024-07-12 14:37:01.167699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:24.484 [2024-07-12 14:37:01.167727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.484 [2024-07-12 14:37:01.167781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:24.484 [2024-07-12 14:37:01.167796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.484 [2024-07-12 14:37:01.167855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:24.484 [2024-07-12 14:37:01.167874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.484 #47 NEW cov: 12213 ft: 15482 corp: 36/1254b lim: 90 exec/s: 47 rss: 74Mb L: 71/71 MS: 1 CopyPart- 00:08:24.484 [2024-07-12 14:37:01.217664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:24.484 [2024-07-12 14:37:01.217690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.484 [2024-07-12 14:37:01.217743] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:24.484 [2024-07-12 14:37:01.217760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.484 #48 NEW cov: 12213 ft: 15488 corp: 37/1306b lim: 90 exec/s: 48 rss: 74Mb L: 52/71 MS: 1 InsertRepeatedBytes- 00:08:24.484 [2024-07-12 14:37:01.257637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:24.484 [2024-07-12 14:37:01.257664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.741 #49 NEW cov: 12213 ft: 15508 corp: 38/1334b lim: 90 exec/s: 49 rss: 74Mb L: 28/71 MS: 1 CopyPart- 00:08:24.741 [2024-07-12 14:37:01.307784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:24.741 [2024-07-12 14:37:01.307812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.741 #50 NEW cov: 12213 ft: 15509 corp: 39/1368b lim: 90 exec/s: 50 rss: 74Mb L: 34/71 MS: 1 ChangeByte- 00:08:24.741 [2024-07-12 14:37:01.348055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:24.741 [2024-07-12 14:37:01.348082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.741 [2024-07-12 14:37:01.348139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:24.742 [2024-07-12 14:37:01.348155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.742 #51 NEW cov: 12213 ft: 15562 corp: 40/1419b lim: 90 exec/s: 25 rss: 74Mb L: 51/71 MS: 1 CrossOver- 00:08:24.742 #51 DONE cov: 12213 ft: 15562 corp: 40/1419b lim: 90 exec/s: 25 rss: 74Mb 00:08:24.742 Done 51 runs in 2 second(s) 00:08:24.742 14:37:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:08:24.742 14:37:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:24.742 14:37:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:24.742 14:37:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:08:24.742 14:37:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:08:24.742 14:37:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:24.742 14:37:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:24.742 14:37:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:24.742 14:37:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:08:24.742 14:37:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:24.742 14:37:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:24.742 14:37:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:08:24.742 14:37:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:08:24.742 14:37:01 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:24.742 14:37:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:08:24.742 14:37:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:24.742 14:37:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:24.742 14:37:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:24.742 14:37:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:08:25.000 [2024-07-12 14:37:01.554890] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:08:25.000 [2024-07-12 14:37:01.554963] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431040 ] 00:08:25.000 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.000 [2024-07-12 14:37:01.774059] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.259 [2024-07-12 14:37:01.848097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.259 [2024-07-12 14:37:01.907512] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.259 [2024-07-12 14:37:01.923708] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:08:25.259 INFO: Running with entropic power schedule (0xFF, 100). 00:08:25.259 INFO: Seed: 746621412 00:08:25.259 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:08:25.259 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:08:25.259 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:25.259 INFO: A corpus is not provided, starting from an empty corpus 00:08:25.259 #2 INITED exec/s: 0 rss: 64Mb 00:08:25.259 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:25.259 This may also happen if the target rejected all inputs we tried so far 00:08:25.259 [2024-07-12 14:37:01.982768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:25.259 [2024-07-12 14:37:01.982798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.823 NEW_FUNC[1/697]: 0x4a9350 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:08:25.823 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:25.823 #5 NEW cov: 11944 ft: 11945 corp: 2/12b lim: 50 exec/s: 0 rss: 72Mb L: 11/11 MS: 3 CMP-EraseBytes-CopyPart- DE: "\363\217\211\025o}%\000"- 00:08:25.823 [2024-07-12 14:37:02.323838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:25.823 [2024-07-12 14:37:02.323902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.823 #8 NEW cov: 12074 ft: 12602 corp: 3/23b lim: 50 exec/s: 0 rss: 72Mb L: 11/11 MS: 3 EraseBytes-ChangeByte-CMP- DE: "\017\000"- 00:08:25.823 [2024-07-12 14:37:02.383738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:25.823 [2024-07-12 14:37:02.383766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.823 #9 NEW cov: 12080 ft: 12808 corp: 4/34b lim: 50 exec/s: 0 rss: 72Mb L: 11/11 MS: 1 ChangeBinInt- 00:08:25.823 [2024-07-12 14:37:02.433881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:25.823 [2024-07-12 14:37:02.433911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.823 #10 NEW cov: 12165 ft: 13079 corp: 5/45b lim: 50 exec/s: 0 rss: 72Mb L: 11/11 MS: 1 CrossOver- 00:08:25.823 [2024-07-12 14:37:02.474011] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:25.823 [2024-07-12 14:37:02.474037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.823 #11 NEW cov: 12165 ft: 13146 corp: 6/56b lim: 50 exec/s: 0 rss: 72Mb L: 11/11 MS: 1 ChangeByte- 00:08:25.823 [2024-07-12 14:37:02.524310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:25.823 [2024-07-12 14:37:02.524337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.823 [2024-07-12 14:37:02.524394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:25.823 [2024-07-12 14:37:02.524408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.823 #12 NEW cov: 12165 ft: 13962 corp: 7/76b lim: 50 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:08:25.823 [2024-07-12 14:37:02.564234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 
00:08:25.823 [2024-07-12 14:37:02.564261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.823 #13 NEW cov: 12165 ft: 14070 corp: 8/87b lim: 50 exec/s: 0 rss: 72Mb L: 11/20 MS: 1 PersAutoDict- DE: "\363\217\211\025o}%\000"- 00:08:25.823 [2024-07-12 14:37:02.604340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:25.823 [2024-07-12 14:37:02.604366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.080 #14 NEW cov: 12165 ft: 14093 corp: 9/98b lim: 50 exec/s: 0 rss: 72Mb L: 11/20 MS: 1 ChangeBit- 00:08:26.081 [2024-07-12 14:37:02.654466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.081 [2024-07-12 14:37:02.654493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.081 #15 NEW cov: 12165 ft: 14108 corp: 10/109b lim: 50 exec/s: 0 rss: 73Mb L: 11/20 MS: 1 ChangeByte- 00:08:26.081 [2024-07-12 14:37:02.704622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.081 [2024-07-12 14:37:02.704649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.081 #16 NEW cov: 12165 ft: 14142 corp: 11/126b lim: 50 exec/s: 0 rss: 73Mb L: 17/20 MS: 1 CopyPart- 00:08:26.081 [2024-07-12 14:37:02.744916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.081 [2024-07-12 14:37:02.744944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.081 [2024-07-12 14:37:02.744997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:26.081 [2024-07-12 14:37:02.745012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.081 #17 NEW cov: 12165 ft: 14234 corp: 12/147b lim: 50 exec/s: 0 rss: 73Mb L: 21/21 MS: 1 InsertRepeatedBytes- 00:08:26.081 [2024-07-12 14:37:02.784976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.081 [2024-07-12 14:37:02.785004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.081 [2024-07-12 14:37:02.785059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:26.081 [2024-07-12 14:37:02.785076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.081 #18 NEW cov: 12165 ft: 14292 corp: 13/167b lim: 50 exec/s: 0 rss: 73Mb L: 20/21 MS: 1 ShuffleBytes- 00:08:26.081 [2024-07-12 14:37:02.835125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.081 [2024-07-12 14:37:02.835152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.081 [2024-07-12 14:37:02.835206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:26.081 [2024-07-12 14:37:02.835222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.338 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:26.338 #19 NEW cov: 12188 ft: 14361 corp: 14/188b lim: 50 exec/s: 0 rss: 73Mb L: 21/21 MS: 1 InsertByte- 00:08:26.338 [2024-07-12 14:37:02.895157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.338 [2024-07-12 14:37:02.895185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.338 #20 NEW cov: 12188 ft: 14370 corp: 15/199b lim: 50 exec/s: 0 rss: 73Mb L: 11/21 MS: 1 ShuffleBytes- 00:08:26.338 [2024-07-12 14:37:02.935429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.338 [2024-07-12 14:37:02.935458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.338 [2024-07-12 14:37:02.935514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:26.338 [2024-07-12 14:37:02.935533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.338 #26 NEW cov: 12188 ft: 14415 corp: 16/220b lim: 50 exec/s: 26 rss: 73Mb L: 21/21 MS: 1 ShuffleBytes- 00:08:26.338 [2024-07-12 14:37:02.985460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.338 [2024-07-12 14:37:02.985488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.338 #27 NEW cov: 12188 ft: 14456 corp: 17/232b lim: 50 exec/s: 27 rss: 73Mb L: 12/21 MS: 1 InsertByte- 00:08:26.338 [2024-07-12 14:37:03.025542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.338 [2024-07-12 14:37:03.025571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.338 #28 NEW cov: 12188 ft: 14489 corp: 18/243b lim: 50 exec/s: 28 rss: 73Mb L: 11/21 MS: 1 PersAutoDict- DE: "\017\000"- 00:08:26.338 [2024-07-12 14:37:03.075870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.338 [2024-07-12 14:37:03.075898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.338 [2024-07-12 14:37:03.075942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:26.338 [2024-07-12 14:37:03.075959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.338 #29 NEW cov: 12188 ft: 14533 corp: 19/264b lim: 50 exec/s: 29 rss: 73Mb L: 21/21 MS: 1 ChangeBinInt- 00:08:26.339 [2024-07-12 14:37:03.125836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.339 [2024-07-12 14:37:03.125864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.596 #33 NEW cov: 12188 ft: 14550 corp: 20/282b lim: 50 exec/s: 33 rss: 73Mb L: 18/21 MS: 4 EraseBytes-ChangeByte-CopyPart-CrossOver- 00:08:26.596 [2024-07-12 14:37:03.165944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.596 [2024-07-12 14:37:03.165970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.596 #34 NEW cov: 12188 ft: 14560 corp: 21/294b lim: 50 exec/s: 34 rss: 73Mb L: 12/21 MS: 1 CrossOver- 00:08:26.596 [2024-07-12 14:37:03.216199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.596 [2024-07-12 14:37:03.216226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.596 [2024-07-12 14:37:03.216265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:26.596 [2024-07-12 14:37:03.216281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.596 #35 NEW cov: 12188 ft: 14571 corp: 22/316b lim: 50 exec/s: 35 rss: 73Mb L: 22/22 MS: 1 PersAutoDict- DE: "\017\000"- 00:08:26.596 [2024-07-12 14:37:03.256175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.596 [2024-07-12 14:37:03.256202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.596 #36 NEW cov: 12188 ft: 14582 corp: 23/327b lim: 50 exec/s: 36 rss: 73Mb L: 11/22 MS: 1 ChangeByte- 00:08:26.596 [2024-07-12 14:37:03.296289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.596 [2024-07-12 14:37:03.296316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.596 #37 NEW cov: 12188 ft: 14589 corp: 24/345b lim: 50 exec/s: 37 rss: 74Mb L: 18/22 MS: 1 ChangeByte- 00:08:26.596 [2024-07-12 14:37:03.346430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.596 [2024-07-12 14:37:03.346457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.596 #38 NEW cov: 12188 ft: 14591 corp: 25/356b lim: 50 exec/s: 38 rss: 74Mb L: 11/22 MS: 1 ChangeBinInt- 00:08:26.854 [2024-07-12 14:37:03.396718] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.854 [2024-07-12 14:37:03.396746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.854 [2024-07-12 14:37:03.396813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:26.854 [2024-07-12 14:37:03.396829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.854 #39 NEW cov: 12188 ft: 14612 corp: 26/379b lim: 50 exec/s: 39 rss: 74Mb L: 23/23 MS: 1 CrossOver- 00:08:26.854 [2024-07-12 14:37:03.446727] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.854 [2024-07-12 14:37:03.446754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.854 #40 NEW cov: 12188 ft: 14623 corp: 27/396b lim: 50 exec/s: 40 rss: 74Mb L: 17/23 MS: 1 CrossOver- 00:08:26.854 [2024-07-12 14:37:03.496848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.854 [2024-07-12 14:37:03.496875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.854 #41 NEW cov: 12188 ft: 14633 corp: 28/408b lim: 50 exec/s: 41 rss: 74Mb L: 12/23 MS: 1 CMP- DE: "\000\000\000\000"- 00:08:26.854 [2024-07-12 14:37:03.546982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.854 [2024-07-12 14:37:03.547009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.854 #42 NEW cov: 12188 ft: 14647 corp: 29/419b lim: 50 exec/s: 42 rss: 74Mb L: 11/23 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:08:26.854 [2024-07-12 14:37:03.587273] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.854 [2024-07-12 14:37:03.587304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.854 [2024-07-12 14:37:03.587354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:26.854 [2024-07-12 14:37:03.587371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.854 #43 NEW cov: 12188 ft: 14663 corp: 30/441b lim: 50 exec/s: 43 rss: 74Mb L: 22/23 MS: 1 ChangeBinInt- 00:08:26.854 [2024-07-12 14:37:03.637393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:26.854 [2024-07-12 14:37:03.637420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.854 [2024-07-12 14:37:03.637469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:26.854 [2024-07-12 14:37:03.637488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.112 #44 NEW cov: 12188 ft: 14665 corp: 31/463b lim: 50 exec/s: 44 rss: 74Mb L: 22/23 MS: 1 PersAutoDict- DE: "\017\000"- 00:08:27.112 [2024-07-12 14:37:03.687553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:27.112 [2024-07-12 14:37:03.687579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.112 [2024-07-12 14:37:03.687633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:27.112 [2024-07-12 14:37:03.687650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.112 #45 NEW cov: 12188 ft: 14695 corp: 32/486b lim: 50 exec/s: 45 rss: 74Mb L: 23/23 MS: 1 ChangeBit- 00:08:27.112 
[2024-07-12 14:37:03.737565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:27.112 [2024-07-12 14:37:03.737592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.112 #46 NEW cov: 12188 ft: 14722 corp: 33/497b lim: 50 exec/s: 46 rss: 74Mb L: 11/23 MS: 1 ChangeBinInt- 00:08:27.112 [2024-07-12 14:37:03.787841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:27.112 [2024-07-12 14:37:03.787868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.112 [2024-07-12 14:37:03.787918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:27.112 [2024-07-12 14:37:03.787934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.112 #47 NEW cov: 12188 ft: 14733 corp: 34/519b lim: 50 exec/s: 47 rss: 74Mb L: 22/23 MS: 1 ChangeBit- 00:08:27.112 [2024-07-12 14:37:03.827802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:27.112 [2024-07-12 14:37:03.827828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.112 #48 NEW cov: 12188 ft: 14790 corp: 35/530b lim: 50 exec/s: 48 rss: 74Mb L: 11/23 MS: 1 ChangeBit- 00:08:27.112 [2024-07-12 14:37:03.868059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:27.112 [2024-07-12 14:37:03.868086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.112 [2024-07-12 14:37:03.868137] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:27.112 [2024-07-12 14:37:03.868152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.112 #54 NEW cov: 12188 ft: 14845 corp: 36/552b lim: 50 exec/s: 54 rss: 74Mb L: 22/23 MS: 1 ChangeByte- 00:08:27.372 [2024-07-12 14:37:03.908040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:27.372 [2024-07-12 14:37:03.908068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.372 #55 NEW cov: 12188 ft: 14913 corp: 37/563b lim: 50 exec/s: 55 rss: 75Mb L: 11/23 MS: 1 EraseBytes- 00:08:27.372 [2024-07-12 14:37:03.958172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:27.372 [2024-07-12 14:37:03.958198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.372 #56 NEW cov: 12188 ft: 14932 corp: 38/573b lim: 50 exec/s: 28 rss: 75Mb L: 10/23 MS: 1 EraseBytes- 00:08:27.372 #56 DONE cov: 12188 ft: 14932 corp: 38/573b lim: 50 exec/s: 28 rss: 75Mb 00:08:27.372 ###### Recommended dictionary. ###### 00:08:27.372 "\363\217\211\025o}%\000" # Uses: 1 00:08:27.372 "\017\000" # Uses: 3 00:08:27.372 "\000\000\000\000" # Uses: 1 00:08:27.372 ###### End of recommended dictionary. 
###### 00:08:27.372 Done 56 runs in 2 second(s) 00:08:27.372 14:37:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:08:27.372 14:37:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:27.372 14:37:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:27.372 14:37:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:08:27.372 14:37:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:08:27.372 14:37:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:27.372 14:37:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:27.372 14:37:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:27.372 14:37:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:08:27.372 14:37:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:27.372 14:37:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:27.372 14:37:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:08:27.372 14:37:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:08:27.372 14:37:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:27.372 14:37:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:08:27.372 14:37:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:27.372 14:37:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:27.372 14:37:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:27.372 14:37:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:08:27.631 [2024-07-12 14:37:04.177778] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
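Each start_llvm_fuzz iteration traced above repeats the same setup: derive a listener port from the fuzzer index, create a per-index corpus directory, rewrite trsvcid in the shared fuzz_json.conf, write the leak suppressions, and launch llvm_nvme_fuzz against the resulting transport ID. The sketch below reconstructs that sequence from the trace for illustration; the $rootdir prefix, the sed output redirection, and the exact structure of the function are assumptions rather than the literal text of nvmf/run.sh.

# Hypothetical reconstruction of one start_llvm_fuzz iteration, pieced together from the trace.
start_llvm_fuzz() {
  local fuzzer_type=$1 timen=$2 core=$3
  local corpus_dir=$rootdir/../corpus/llvm_nvmf_$(printf %02d "$fuzzer_type")
  local nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
  local suppress_file=/var/tmp/suppress_nvmf_fuzz
  local LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0
  local port=44$(printf %02d "$fuzzer_type")             # 21 -> 4421, 22 -> 4422, 23 -> 4423

  mkdir -p "$corpus_dir"
  local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

  # Point this run's JSON config at its own listener port instead of the default 4420.
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

  # Leak suppressions (see the note after the run-21 trace above).
  echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
  echo leak:nvmf_ctrlr_create >> "$suppress_file"

  "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
      -m "$core" -s 512 -P "$rootdir/../output/llvm/" -F "$trid" \
      -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"

  rm -rf "$nvmf_cfg" "$suppress_file"                    # run.sh@54, the cleanup seen between runs
}

The ../common.sh loop then advances the index and calls the next instance, e.g. start_llvm_fuzz 22 1 0x1, which is exactly the transition visible in the trace above.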
00:08:27.631 [2024-07-12 14:37:04.177850] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431341 ] 00:08:27.631 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.631 [2024-07-12 14:37:04.398605] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.889 [2024-07-12 14:37:04.475199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.889 [2024-07-12 14:37:04.534521] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.889 [2024-07-12 14:37:04.550717] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:08:27.889 INFO: Running with entropic power schedule (0xFF, 100). 00:08:27.889 INFO: Seed: 3375597148 00:08:27.889 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:08:27.889 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:08:27.889 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:27.889 INFO: A corpus is not provided, starting from an empty corpus 00:08:27.889 #2 INITED exec/s: 0 rss: 64Mb 00:08:27.889 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:27.889 This may also happen if the target rejected all inputs we tried so far 00:08:27.889 [2024-07-12 14:37:04.616475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:27.889 [2024-07-12 14:37:04.616507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.889 [2024-07-12 14:37:04.616550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:27.889 [2024-07-12 14:37:04.616566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.889 [2024-07-12 14:37:04.616622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:27.889 [2024-07-12 14:37:04.616639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.889 [2024-07-12 14:37:04.616696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:27.889 [2024-07-12 14:37:04.616712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.455 NEW_FUNC[1/697]: 0x4ab610 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:08:28.455 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:28.455 #30 NEW cov: 11970 ft: 11963 corp: 2/81b lim: 85 exec/s: 0 rss: 72Mb L: 80/80 MS: 3 CrossOver-InsertByte-InsertRepeatedBytes- 00:08:28.455 [2024-07-12 14:37:04.957171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:28.455 [2024-07-12 14:37:04.957226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.455 [2024-07-12 14:37:04.957298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:28.455 [2024-07-12 14:37:04.957322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.455 [2024-07-12 14:37:04.957389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:28.455 [2024-07-12 14:37:04.957413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.455 #32 NEW cov: 12100 ft: 12930 corp: 3/133b lim: 85 exec/s: 0 rss: 72Mb L: 52/80 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:28.455 [2024-07-12 14:37:05.007094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:28.455 [2024-07-12 14:37:05.007122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.455 [2024-07-12 14:37:05.007160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:28.455 [2024-07-12 14:37:05.007175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.455 [2024-07-12 14:37:05.007230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:28.455 [2024-07-12 14:37:05.007244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.455 #38 NEW cov: 12106 ft: 13126 corp: 4/200b lim: 85 exec/s: 0 rss: 72Mb L: 67/80 MS: 1 CrossOver- 00:08:28.455 [2024-07-12 14:37:05.057238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:28.455 [2024-07-12 14:37:05.057265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.455 [2024-07-12 14:37:05.057303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:28.455 [2024-07-12 14:37:05.057318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.455 [2024-07-12 14:37:05.057373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:28.455 [2024-07-12 14:37:05.057388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.455 #39 NEW cov: 12191 ft: 13527 corp: 5/252b lim: 85 exec/s: 0 rss: 72Mb L: 52/80 MS: 1 ChangeBit- 00:08:28.456 [2024-07-12 14:37:05.097344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:28.456 [2024-07-12 14:37:05.097370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.456 [2024-07-12 14:37:05.097415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:28.456 [2024-07-12 14:37:05.097430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.456 [2024-07-12 14:37:05.097483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:28.456 [2024-07-12 14:37:05.097499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.456 #40 NEW cov: 12191 ft: 13597 corp: 6/304b lim: 85 exec/s: 0 rss: 72Mb L: 52/80 MS: 1 ChangeBit- 00:08:28.456 [2024-07-12 14:37:05.137651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:28.456 [2024-07-12 14:37:05.137677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.456 [2024-07-12 14:37:05.137721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:28.456 [2024-07-12 14:37:05.137737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.456 [2024-07-12 14:37:05.137789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:28.456 [2024-07-12 14:37:05.137804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.456 [2024-07-12 14:37:05.137856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:28.456 [2024-07-12 14:37:05.137871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.456 #41 NEW cov: 12191 ft: 13629 corp: 7/386b lim: 85 exec/s: 0 rss: 72Mb L: 82/82 MS: 1 InsertRepeatedBytes- 00:08:28.456 [2024-07-12 14:37:05.177588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:28.456 [2024-07-12 14:37:05.177614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.456 [2024-07-12 14:37:05.177685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:28.456 [2024-07-12 14:37:05.177704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.456 [2024-07-12 14:37:05.177754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:28.456 [2024-07-12 14:37:05.177769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.456 #42 NEW cov: 12191 ft: 13787 corp: 8/438b lim: 85 exec/s: 0 rss: 72Mb L: 52/82 MS: 1 ChangeByte- 00:08:28.456 [2024-07-12 14:37:05.227896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:28.456 [2024-07-12 14:37:05.227923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.456 [2024-07-12 14:37:05.227970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:28.456 [2024-07-12 14:37:05.227986] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.456 [2024-07-12 14:37:05.228035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:28.456 [2024-07-12 14:37:05.228067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.456 [2024-07-12 14:37:05.228120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:28.456 [2024-07-12 14:37:05.228135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.714 #43 NEW cov: 12191 ft: 13804 corp: 9/520b lim: 85 exec/s: 0 rss: 72Mb L: 82/82 MS: 1 ChangeBinInt- 00:08:28.714 [2024-07-12 14:37:05.277887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:28.714 [2024-07-12 14:37:05.277914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.714 [2024-07-12 14:37:05.277953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:28.714 [2024-07-12 14:37:05.277969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.715 [2024-07-12 14:37:05.278022] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:28.715 [2024-07-12 14:37:05.278037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.715 #44 NEW cov: 12191 ft: 13877 corp: 10/573b lim: 85 exec/s: 0 rss: 72Mb L: 53/82 MS: 1 InsertByte- 00:08:28.715 [2024-07-12 14:37:05.317725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:28.715 [2024-07-12 14:37:05.317751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.715 #47 NEW cov: 12191 ft: 14751 corp: 11/594b lim: 85 exec/s: 0 rss: 72Mb L: 21/82 MS: 3 ChangeByte-CopyPart-InsertRepeatedBytes- 00:08:28.715 [2024-07-12 14:37:05.358103] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:28.715 [2024-07-12 14:37:05.358129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.715 [2024-07-12 14:37:05.358191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:28.715 [2024-07-12 14:37:05.358207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.715 [2024-07-12 14:37:05.358261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:28.715 [2024-07-12 14:37:05.358276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.715 #48 NEW cov: 12191 ft: 14769 corp: 12/661b lim: 85 exec/s: 0 rss: 72Mb L: 67/82 MS: 1 ShuffleBytes- 00:08:28.715 [2024-07-12 14:37:05.408292] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:28.715 [2024-07-12 14:37:05.408320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.715 [2024-07-12 14:37:05.408376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:28.715 [2024-07-12 14:37:05.408392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.715 [2024-07-12 14:37:05.408445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:28.715 [2024-07-12 14:37:05.408461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.715 #49 NEW cov: 12191 ft: 14782 corp: 13/713b lim: 85 exec/s: 0 rss: 72Mb L: 52/82 MS: 1 ChangeBit- 00:08:28.715 [2024-07-12 14:37:05.458381] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:28.715 [2024-07-12 14:37:05.458408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.715 [2024-07-12 14:37:05.458461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:28.715 [2024-07-12 14:37:05.458477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.715 [2024-07-12 14:37:05.458534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:28.715 [2024-07-12 14:37:05.458549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.715 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:28.715 #50 NEW cov: 12214 ft: 14833 corp: 14/765b lim: 85 exec/s: 0 rss: 73Mb L: 52/82 MS: 1 ChangeBit- 00:08:28.715 [2024-07-12 14:37:05.498539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:28.715 [2024-07-12 14:37:05.498568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.715 [2024-07-12 14:37:05.498614] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:28.715 [2024-07-12 14:37:05.498630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.715 [2024-07-12 14:37:05.498684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:28.715 [2024-07-12 14:37:05.498701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.974 #51 NEW cov: 12214 ft: 14946 corp: 15/822b lim: 85 exec/s: 0 rss: 73Mb L: 57/82 MS: 1 InsertRepeatedBytes- 00:08:28.974 [2024-07-12 14:37:05.548659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:28.974 [2024-07-12 14:37:05.548686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.974 [2024-07-12 14:37:05.548723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:28.974 [2024-07-12 14:37:05.548738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.974 [2024-07-12 14:37:05.548791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:28.974 [2024-07-12 14:37:05.548807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.974 #52 NEW cov: 12214 ft: 14956 corp: 16/874b lim: 85 exec/s: 0 rss: 73Mb L: 52/82 MS: 1 ShuffleBytes- 00:08:28.974 [2024-07-12 14:37:05.588866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:28.974 [2024-07-12 14:37:05.588893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.974 [2024-07-12 14:37:05.588935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:28.974 [2024-07-12 14:37:05.588951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.974 [2024-07-12 14:37:05.589004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:28.974 [2024-07-12 14:37:05.589020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.974 [2024-07-12 14:37:05.589072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:28.974 [2024-07-12 14:37:05.589087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.974 #53 NEW cov: 12214 ft: 14979 corp: 17/958b lim: 85 exec/s: 53 rss: 73Mb L: 84/84 MS: 1 CopyPart- 00:08:28.974 [2024-07-12 14:37:05.639064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:28.974 [2024-07-12 14:37:05.639091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.974 [2024-07-12 14:37:05.639138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:28.974 [2024-07-12 14:37:05.639153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.974 [2024-07-12 14:37:05.639207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:28.974 [2024-07-12 14:37:05.639222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.974 [2024-07-12 14:37:05.639275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:28.974 [2024-07-12 14:37:05.639291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.974 #54 NEW cov: 12214 ft: 14997 corp: 18/1036b 
lim: 85 exec/s: 54 rss: 73Mb L: 78/84 MS: 1 EraseBytes- 00:08:28.974 [2024-07-12 14:37:05.679000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:28.974 [2024-07-12 14:37:05.679027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.974 [2024-07-12 14:37:05.679069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:28.974 [2024-07-12 14:37:05.679086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.974 [2024-07-12 14:37:05.679141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:28.974 [2024-07-12 14:37:05.679156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.974 #55 NEW cov: 12214 ft: 15066 corp: 19/1089b lim: 85 exec/s: 55 rss: 73Mb L: 53/84 MS: 1 InsertByte- 00:08:28.974 [2024-07-12 14:37:05.739279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:28.974 [2024-07-12 14:37:05.739308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.974 [2024-07-12 14:37:05.739347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:28.974 [2024-07-12 14:37:05.739366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.974 [2024-07-12 14:37:05.739423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:28.974 [2024-07-12 14:37:05.739440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.232 #56 NEW cov: 12214 ft: 15089 corp: 20/1141b lim: 85 exec/s: 56 rss: 73Mb L: 52/84 MS: 1 CopyPart- 00:08:29.232 [2024-07-12 14:37:05.779310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:29.232 [2024-07-12 14:37:05.779338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.232 [2024-07-12 14:37:05.779390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:29.232 [2024-07-12 14:37:05.779405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.232 [2024-07-12 14:37:05.779459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:29.232 [2024-07-12 14:37:05.779474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.232 #57 NEW cov: 12214 ft: 15109 corp: 21/1204b lim: 85 exec/s: 57 rss: 73Mb L: 63/84 MS: 1 InsertRepeatedBytes- 00:08:29.232 [2024-07-12 14:37:05.829433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:29.232 [2024-07-12 14:37:05.829460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.232 [2024-07-12 14:37:05.829495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:29.232 [2024-07-12 14:37:05.829511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.232 [2024-07-12 14:37:05.829585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:29.232 [2024-07-12 14:37:05.829601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.232 #58 NEW cov: 12214 ft: 15119 corp: 22/1271b lim: 85 exec/s: 58 rss: 73Mb L: 67/84 MS: 1 EraseBytes- 00:08:29.232 [2024-07-12 14:37:05.879603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:29.232 [2024-07-12 14:37:05.879629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.232 [2024-07-12 14:37:05.879676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:29.232 [2024-07-12 14:37:05.879691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.232 [2024-07-12 14:37:05.879744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:29.232 [2024-07-12 14:37:05.879758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.232 #59 NEW cov: 12214 ft: 15140 corp: 23/1334b lim: 85 exec/s: 59 rss: 73Mb L: 63/84 MS: 1 ChangeBinInt- 00:08:29.232 [2024-07-12 14:37:05.929752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:29.232 [2024-07-12 14:37:05.929777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.232 [2024-07-12 14:37:05.929824] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:29.232 [2024-07-12 14:37:05.929840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.232 [2024-07-12 14:37:05.929896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:29.232 [2024-07-12 14:37:05.929912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.232 #60 NEW cov: 12214 ft: 15147 corp: 24/1401b lim: 85 exec/s: 60 rss: 73Mb L: 67/84 MS: 1 ShuffleBytes- 00:08:29.232 [2024-07-12 14:37:05.979906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:29.232 [2024-07-12 14:37:05.979933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.232 [2024-07-12 14:37:05.979986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:29.233 [2024-07-12 14:37:05.980001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.233 [2024-07-12 14:37:05.980057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:29.233 [2024-07-12 14:37:05.980072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.233 #61 NEW cov: 12214 ft: 15175 corp: 25/1468b lim: 85 exec/s: 61 rss: 73Mb L: 67/84 MS: 1 ChangeBit- 00:08:29.233 [2024-07-12 14:37:06.019744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:29.233 [2024-07-12 14:37:06.019771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.491 #64 NEW cov: 12214 ft: 15204 corp: 26/1493b lim: 85 exec/s: 64 rss: 73Mb L: 25/84 MS: 3 ShuffleBytes-ChangeByte-CrossOver- 00:08:29.491 [2024-07-12 14:37:06.060137] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:29.491 [2024-07-12 14:37:06.060163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.491 [2024-07-12 14:37:06.060201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:29.491 [2024-07-12 14:37:06.060215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.491 [2024-07-12 14:37:06.060266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:29.491 [2024-07-12 14:37:06.060297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.491 #65 NEW cov: 12214 ft: 15213 corp: 27/1560b lim: 85 exec/s: 65 rss: 73Mb L: 67/84 MS: 1 ChangeByte- 00:08:29.491 [2024-07-12 14:37:06.110278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:29.491 [2024-07-12 14:37:06.110305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.491 [2024-07-12 14:37:06.110347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:29.491 [2024-07-12 14:37:06.110363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.491 [2024-07-12 14:37:06.110415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:29.491 [2024-07-12 14:37:06.110431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.491 #66 NEW cov: 12214 ft: 15215 corp: 28/1617b lim: 85 exec/s: 66 rss: 74Mb L: 57/84 MS: 1 CopyPart- 00:08:29.491 [2024-07-12 14:37:06.160116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:29.491 [2024-07-12 14:37:06.160143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.491 #67 NEW cov: 12214 ft: 15217 corp: 29/1638b lim: 85 exec/s: 67 rss: 74Mb L: 21/84 MS: 1 
ChangeByte- 00:08:29.491 [2024-07-12 14:37:06.210251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:29.491 [2024-07-12 14:37:06.210278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.491 #68 NEW cov: 12214 ft: 15230 corp: 30/1659b lim: 85 exec/s: 68 rss: 74Mb L: 21/84 MS: 1 CMP- DE: "\377$}q~\354\314\270"- 00:08:29.491 [2024-07-12 14:37:06.250648] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:29.491 [2024-07-12 14:37:06.250676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.491 [2024-07-12 14:37:06.250729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:29.491 [2024-07-12 14:37:06.250745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.491 [2024-07-12 14:37:06.250798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:29.491 [2024-07-12 14:37:06.250814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.491 #69 NEW cov: 12214 ft: 15245 corp: 31/1726b lim: 85 exec/s: 69 rss: 74Mb L: 67/84 MS: 1 ChangeByte- 00:08:29.749 [2024-07-12 14:37:06.290617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:29.749 [2024-07-12 14:37:06.290644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.749 [2024-07-12 14:37:06.290693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:29.749 [2024-07-12 14:37:06.290709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.749 #73 NEW cov: 12214 ft: 15550 corp: 32/1769b lim: 85 exec/s: 73 rss: 74Mb L: 43/84 MS: 4 PersAutoDict-CrossOver-ChangeBinInt-InsertRepeatedBytes- DE: "\377$}q~\354\314\270"- 00:08:29.749 [2024-07-12 14:37:06.330721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:29.749 [2024-07-12 14:37:06.330748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.749 [2024-07-12 14:37:06.330806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:29.749 [2024-07-12 14:37:06.330822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.749 #74 NEW cov: 12214 ft: 15551 corp: 33/1807b lim: 85 exec/s: 74 rss: 74Mb L: 38/84 MS: 1 EraseBytes- 00:08:29.749 [2024-07-12 14:37:06.380724] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:29.749 [2024-07-12 14:37:06.380750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.749 #75 NEW cov: 12214 ft: 15567 corp: 34/1828b lim: 85 exec/s: 75 rss: 74Mb L: 21/84 MS: 
1 ChangeBit- 00:08:29.749 [2024-07-12 14:37:06.431177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:29.749 [2024-07-12 14:37:06.431203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.749 [2024-07-12 14:37:06.431255] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:29.749 [2024-07-12 14:37:06.431270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.749 [2024-07-12 14:37:06.431324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:29.749 [2024-07-12 14:37:06.431343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.749 #76 NEW cov: 12214 ft: 15577 corp: 35/1895b lim: 85 exec/s: 76 rss: 74Mb L: 67/84 MS: 1 ChangeBinInt- 00:08:29.749 [2024-07-12 14:37:06.481326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:29.749 [2024-07-12 14:37:06.481353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.749 [2024-07-12 14:37:06.481406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:29.749 [2024-07-12 14:37:06.481422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.749 [2024-07-12 14:37:06.481476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:29.749 [2024-07-12 14:37:06.481491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.749 #77 NEW cov: 12214 ft: 15635 corp: 36/1962b lim: 85 exec/s: 77 rss: 74Mb L: 67/84 MS: 1 ChangeBinInt- 00:08:29.749 [2024-07-12 14:37:06.531453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:29.749 [2024-07-12 14:37:06.531481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.749 [2024-07-12 14:37:06.531517] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:29.749 [2024-07-12 14:37:06.531536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.749 [2024-07-12 14:37:06.531588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:29.749 [2024-07-12 14:37:06.531603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.008 #78 NEW cov: 12214 ft: 15648 corp: 37/2023b lim: 85 exec/s: 78 rss: 74Mb L: 61/84 MS: 1 EraseBytes- 00:08:30.008 [2024-07-12 14:37:06.571719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:30.008 [2024-07-12 14:37:06.571746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:08:30.008 [2024-07-12 14:37:06.571793] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:30.008 [2024-07-12 14:37:06.571808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.008 [2024-07-12 14:37:06.571859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:30.008 [2024-07-12 14:37:06.571875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.008 [2024-07-12 14:37:06.571929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:30.008 [2024-07-12 14:37:06.571944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:30.008 [2024-07-12 14:37:06.611845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:30.008 [2024-07-12 14:37:06.611871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.008 [2024-07-12 14:37:06.611932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:30.008 [2024-07-12 14:37:06.611948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.008 [2024-07-12 14:37:06.612002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:30.008 [2024-07-12 14:37:06.612018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.008 [2024-07-12 14:37:06.612069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:30.008 [2024-07-12 14:37:06.612084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:30.008 [2024-07-12 14:37:06.651961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:30.008 [2024-07-12 14:37:06.651987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.008 [2024-07-12 14:37:06.652035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:30.008 [2024-07-12 14:37:06.652050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.008 [2024-07-12 14:37:06.652102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:30.008 [2024-07-12 14:37:06.652117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.008 [2024-07-12 14:37:06.652169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:30.008 [2024-07-12 14:37:06.652185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 
p:0 m:0 dnr:1 00:08:30.008 #81 NEW cov: 12214 ft: 15663 corp: 38/2099b lim: 85 exec/s: 40 rss: 74Mb L: 76/84 MS: 3 PersAutoDict-CopyPart-ChangeBit- DE: "\377$}q~\354\314\270"- 00:08:30.008 #81 DONE cov: 12214 ft: 15663 corp: 38/2099b lim: 85 exec/s: 40 rss: 74Mb 00:08:30.008 ###### Recommended dictionary. ###### 00:08:30.008 "\377$}q~\354\314\270" # Uses: 2 00:08:30.008 ###### End of recommended dictionary. ###### 00:08:30.008 Done 81 runs in 2 second(s) 00:08:30.266 14:37:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:08:30.266 14:37:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:30.266 14:37:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:30.266 14:37:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:08:30.266 14:37:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:08:30.266 14:37:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:30.266 14:37:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:30.266 14:37:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:30.266 14:37:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:08:30.266 14:37:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:30.266 14:37:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:30.266 14:37:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:08:30.266 14:37:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:08:30.266 14:37:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:30.266 14:37:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:08:30.266 14:37:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:30.266 14:37:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:30.266 14:37:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:30.266 14:37:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:08:30.266 [2024-07-12 14:37:06.853793] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:08:30.266 [2024-07-12 14:37:06.853865] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431687 ] 00:08:30.266 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.523 [2024-07-12 14:37:07.072773] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.523 [2024-07-12 14:37:07.148042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.523 [2024-07-12 14:37:07.207450] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.524 [2024-07-12 14:37:07.223650] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:08:30.524 INFO: Running with entropic power schedule (0xFF, 100). 00:08:30.524 INFO: Seed: 1751644851 00:08:30.524 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:08:30.524 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:08:30.524 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:30.524 INFO: A corpus is not provided, starting from an empty corpus 00:08:30.524 #2 INITED exec/s: 0 rss: 64Mb 00:08:30.524 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:30.524 This may also happen if the target rejected all inputs we tried so far 00:08:30.524 [2024-07-12 14:37:07.282832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:30.524 [2024-07-12 14:37:07.282864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.040 NEW_FUNC[1/696]: 0x4ae840 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:08:31.040 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:31.040 #7 NEW cov: 11903 ft: 11904 corp: 2/8b lim: 25 exec/s: 0 rss: 72Mb L: 7/7 MS: 5 CrossOver-CopyPart-ChangeByte-ChangeByte-CopyPart- 00:08:31.040 [2024-07-12 14:37:07.623831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.040 [2024-07-12 14:37:07.623895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.040 #8 NEW cov: 12033 ft: 12457 corp: 3/15b lim: 25 exec/s: 0 rss: 72Mb L: 7/7 MS: 1 ChangeByte- 00:08:31.040 [2024-07-12 14:37:07.683797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.040 [2024-07-12 14:37:07.683827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.040 #9 NEW cov: 12039 ft: 12710 corp: 4/22b lim: 25 exec/s: 0 rss: 72Mb L: 7/7 MS: 1 ChangeBit- 00:08:31.040 [2024-07-12 14:37:07.733876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.040 [2024-07-12 14:37:07.733906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.040 #10 NEW cov: 12124 ft: 13196 corp: 5/29b 
lim: 25 exec/s: 0 rss: 72Mb L: 7/7 MS: 1 ChangeByte- 00:08:31.040 [2024-07-12 14:37:07.774117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.040 [2024-07-12 14:37:07.774144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.040 [2024-07-12 14:37:07.774196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:31.040 [2024-07-12 14:37:07.774214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.040 #11 NEW cov: 12124 ft: 13672 corp: 6/40b lim: 25 exec/s: 0 rss: 72Mb L: 11/11 MS: 1 CrossOver- 00:08:31.040 [2024-07-12 14:37:07.814129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.040 [2024-07-12 14:37:07.814156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.298 #12 NEW cov: 12124 ft: 13751 corp: 7/47b lim: 25 exec/s: 0 rss: 72Mb L: 7/11 MS: 1 ShuffleBytes- 00:08:31.298 [2024-07-12 14:37:07.854344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.298 [2024-07-12 14:37:07.854371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.298 [2024-07-12 14:37:07.854425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:31.298 [2024-07-12 14:37:07.854441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.298 #13 NEW cov: 12124 ft: 13862 corp: 8/60b lim: 25 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 CopyPart- 00:08:31.298 [2024-07-12 14:37:07.894338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.298 [2024-07-12 14:37:07.894366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.298 #15 NEW cov: 12124 ft: 13948 corp: 9/65b lim: 25 exec/s: 0 rss: 72Mb L: 5/13 MS: 2 EraseBytes-InsertByte- 00:08:31.298 [2024-07-12 14:37:07.934570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.298 [2024-07-12 14:37:07.934595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.298 [2024-07-12 14:37:07.934650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:31.298 [2024-07-12 14:37:07.934678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.298 #16 NEW cov: 12124 ft: 13967 corp: 10/76b lim: 25 exec/s: 0 rss: 72Mb L: 11/13 MS: 1 CrossOver- 00:08:31.298 [2024-07-12 14:37:07.974531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.298 [2024-07-12 14:37:07.974557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.298 #17 NEW cov: 12124 ft: 14005 corp: 11/83b lim: 25 
exec/s: 0 rss: 72Mb L: 7/13 MS: 1 CrossOver- 00:08:31.298 [2024-07-12 14:37:08.024708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.298 [2024-07-12 14:37:08.024736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.298 #18 NEW cov: 12124 ft: 14015 corp: 12/90b lim: 25 exec/s: 0 rss: 72Mb L: 7/13 MS: 1 ShuffleBytes- 00:08:31.298 [2024-07-12 14:37:08.064829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.298 [2024-07-12 14:37:08.064857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.298 #19 NEW cov: 12124 ft: 14043 corp: 13/98b lim: 25 exec/s: 0 rss: 72Mb L: 8/13 MS: 1 InsertByte- 00:08:31.556 [2024-07-12 14:37:08.104978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.556 [2024-07-12 14:37:08.105005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.556 #20 NEW cov: 12124 ft: 14060 corp: 14/106b lim: 25 exec/s: 0 rss: 73Mb L: 8/13 MS: 1 EraseBytes- 00:08:31.556 [2024-07-12 14:37:08.155064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.556 [2024-07-12 14:37:08.155093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.556 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:31.556 #21 NEW cov: 12147 ft: 14147 corp: 15/112b lim: 25 exec/s: 0 rss: 73Mb L: 6/13 MS: 1 EraseBytes- 00:08:31.556 [2024-07-12 14:37:08.205186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.556 [2024-07-12 14:37:08.205213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.556 #22 NEW cov: 12147 ft: 14164 corp: 16/119b lim: 25 exec/s: 0 rss: 73Mb L: 7/13 MS: 1 ChangeByte- 00:08:31.556 [2024-07-12 14:37:08.245347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.556 [2024-07-12 14:37:08.245375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.556 #26 NEW cov: 12147 ft: 14179 corp: 17/124b lim: 25 exec/s: 26 rss: 73Mb L: 5/13 MS: 4 EraseBytes-CopyPart-ChangeBinInt-InsertByte- 00:08:31.556 [2024-07-12 14:37:08.295601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.556 [2024-07-12 14:37:08.295631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.556 [2024-07-12 14:37:08.295700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:31.556 [2024-07-12 14:37:08.295716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.556 #27 NEW cov: 12147 ft: 14191 corp: 18/137b lim: 25 exec/s: 27 rss: 73Mb L: 13/13 MS: 1 CopyPart- 00:08:31.556 
[2024-07-12 14:37:08.335726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.556 [2024-07-12 14:37:08.335755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.556 [2024-07-12 14:37:08.335825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:31.556 [2024-07-12 14:37:08.335840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.814 #29 NEW cov: 12147 ft: 14206 corp: 19/150b lim: 25 exec/s: 29 rss: 73Mb L: 13/13 MS: 2 ChangeByte-InsertRepeatedBytes- 00:08:31.814 [2024-07-12 14:37:08.375827] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.814 [2024-07-12 14:37:08.375858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.814 [2024-07-12 14:37:08.375912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:31.814 [2024-07-12 14:37:08.375928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.814 #30 NEW cov: 12147 ft: 14214 corp: 20/163b lim: 25 exec/s: 30 rss: 73Mb L: 13/13 MS: 1 CopyPart- 00:08:31.814 [2024-07-12 14:37:08.425841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.814 [2024-07-12 14:37:08.425871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.814 #31 NEW cov: 12147 ft: 14310 corp: 21/172b lim: 25 exec/s: 31 rss: 73Mb L: 9/13 MS: 1 EraseBytes- 00:08:31.814 [2024-07-12 14:37:08.465947] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.814 [2024-07-12 14:37:08.465975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.814 #32 NEW cov: 12147 ft: 14318 corp: 22/180b lim: 25 exec/s: 32 rss: 73Mb L: 8/13 MS: 1 CopyPart- 00:08:31.814 [2024-07-12 14:37:08.516130] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.814 [2024-07-12 14:37:08.516159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.814 #33 NEW cov: 12147 ft: 14354 corp: 23/186b lim: 25 exec/s: 33 rss: 73Mb L: 6/13 MS: 1 EraseBytes- 00:08:31.814 [2024-07-12 14:37:08.566203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:31.814 [2024-07-12 14:37:08.566232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.814 #34 NEW cov: 12147 ft: 14407 corp: 24/193b lim: 25 exec/s: 34 rss: 73Mb L: 7/13 MS: 1 InsertByte- 00:08:32.072 [2024-07-12 14:37:08.616597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:32.072 [2024-07-12 14:37:08.616625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:08:32.072 [2024-07-12 14:37:08.616689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:32.072 [2024-07-12 14:37:08.616706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.072 [2024-07-12 14:37:08.616760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:32.072 [2024-07-12 14:37:08.616776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.072 #35 NEW cov: 12147 ft: 14738 corp: 25/209b lim: 25 exec/s: 35 rss: 73Mb L: 16/16 MS: 1 InsertRepeatedBytes- 00:08:32.072 [2024-07-12 14:37:08.666468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:32.072 [2024-07-12 14:37:08.666495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.072 #36 NEW cov: 12147 ft: 14831 corp: 26/216b lim: 25 exec/s: 36 rss: 73Mb L: 7/16 MS: 1 ChangeByte- 00:08:32.072 [2024-07-12 14:37:08.716595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:32.072 [2024-07-12 14:37:08.716622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.072 #37 NEW cov: 12147 ft: 14842 corp: 27/222b lim: 25 exec/s: 37 rss: 73Mb L: 6/16 MS: 1 CopyPart- 00:08:32.072 [2024-07-12 14:37:08.756810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:32.072 [2024-07-12 14:37:08.756836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.072 [2024-07-12 14:37:08.756891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:32.072 [2024-07-12 14:37:08.756908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.072 #38 NEW cov: 12147 ft: 14853 corp: 28/234b lim: 25 exec/s: 38 rss: 73Mb L: 12/16 MS: 1 CrossOver- 00:08:32.072 [2024-07-12 14:37:08.797002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:32.072 [2024-07-12 14:37:08.797030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.072 [2024-07-12 14:37:08.797068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:32.072 [2024-07-12 14:37:08.797083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.072 #39 NEW cov: 12147 ft: 14859 corp: 29/247b lim: 25 exec/s: 39 rss: 73Mb L: 13/16 MS: 1 ChangeBinInt- 00:08:32.072 [2024-07-12 14:37:08.846950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:32.072 [2024-07-12 14:37:08.846977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.330 #42 NEW cov: 12147 ft: 14900 corp: 30/252b lim: 25 exec/s: 42 rss: 73Mb L: 5/16 MS: 3 
EraseBytes-ChangeByte-CrossOver- 00:08:32.330 [2024-07-12 14:37:08.897247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:32.330 [2024-07-12 14:37:08.897274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.330 [2024-07-12 14:37:08.897345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:32.330 [2024-07-12 14:37:08.897360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.330 #43 NEW cov: 12147 ft: 14906 corp: 31/264b lim: 25 exec/s: 43 rss: 73Mb L: 12/16 MS: 1 CrossOver- 00:08:32.330 [2024-07-12 14:37:08.937345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:32.330 [2024-07-12 14:37:08.937371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.330 [2024-07-12 14:37:08.937439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:32.330 [2024-07-12 14:37:08.937455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.330 #44 NEW cov: 12147 ft: 14927 corp: 32/275b lim: 25 exec/s: 44 rss: 74Mb L: 11/16 MS: 1 ChangeBit- 00:08:32.330 [2024-07-12 14:37:08.987335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:32.330 [2024-07-12 14:37:08.987361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.330 #45 NEW cov: 12147 ft: 14958 corp: 33/282b lim: 25 exec/s: 45 rss: 74Mb L: 7/16 MS: 1 ChangeBinInt- 00:08:32.330 [2024-07-12 14:37:09.037504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:32.330 [2024-07-12 14:37:09.037533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.330 #46 NEW cov: 12147 ft: 15015 corp: 34/290b lim: 25 exec/s: 46 rss: 74Mb L: 8/16 MS: 1 CrossOver- 00:08:32.330 [2024-07-12 14:37:09.087746] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:32.330 [2024-07-12 14:37:09.087772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.330 [2024-07-12 14:37:09.087828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:32.330 [2024-07-12 14:37:09.087843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.589 [2024-07-12 14:37:09.128063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:32.589 [2024-07-12 14:37:09.128090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.589 [2024-07-12 14:37:09.128163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:32.589 [2024-07-12 14:37:09.128179] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.589 [2024-07-12 14:37:09.128234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:32.589 [2024-07-12 14:37:09.128249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.589 [2024-07-12 14:37:09.128305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:32.589 [2024-07-12 14:37:09.128324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.589 #48 NEW cov: 12147 ft: 15432 corp: 35/311b lim: 25 exec/s: 48 rss: 74Mb L: 21/21 MS: 2 CrossOver-CMP- DE: "\005\000\000\000\000\000\000\000"- 00:08:32.589 [2024-07-12 14:37:09.168206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:32.589 [2024-07-12 14:37:09.168233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.589 [2024-07-12 14:37:09.168305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:32.589 [2024-07-12 14:37:09.168322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.589 [2024-07-12 14:37:09.168379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:32.589 [2024-07-12 14:37:09.168395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.589 [2024-07-12 14:37:09.168455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:32.589 [2024-07-12 14:37:09.168469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.589 #49 NEW cov: 12147 ft: 15451 corp: 36/333b lim: 25 exec/s: 49 rss: 74Mb L: 22/22 MS: 1 InsertByte- 00:08:32.589 [2024-07-12 14:37:09.217995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:32.589 [2024-07-12 14:37:09.218022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.589 #50 NEW cov: 12147 ft: 15458 corp: 37/339b lim: 25 exec/s: 50 rss: 74Mb L: 6/22 MS: 1 InsertByte- 00:08:32.589 [2024-07-12 14:37:09.258228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:32.589 [2024-07-12 14:37:09.258255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.589 [2024-07-12 14:37:09.258312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:32.589 [2024-07-12 14:37:09.258328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.589 #51 NEW cov: 12147 ft: 15463 corp: 38/351b lim: 25 exec/s: 25 rss: 74Mb L: 12/22 MS: 1 ChangeBinInt- 00:08:32.589 #51 DONE cov: 12147 ft: 15463 
corp: 38/351b lim: 25 exec/s: 25 rss: 74Mb 00:08:32.589 ###### Recommended dictionary. ###### 00:08:32.589 "\005\000\000\000\000\000\000\000" # Uses: 0 00:08:32.589 ###### End of recommended dictionary. ###### 00:08:32.589 Done 51 runs in 2 second(s) 00:08:32.848 14:37:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:08:32.848 14:37:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:32.848 14:37:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:32.848 14:37:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:08:32.848 14:37:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:08:32.848 14:37:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:32.848 14:37:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:32.848 14:37:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:32.848 14:37:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:08:32.848 14:37:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:32.848 14:37:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:32.848 14:37:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:08:32.848 14:37:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:08:32.848 14:37:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:32.848 14:37:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:08:32.848 14:37:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:32.848 14:37:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:32.848 14:37:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:32.848 14:37:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:08:32.848 [2024-07-12 14:37:09.477253] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 
00:08:32.848 [2024-07-12 14:37:09.477327] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1432037 ] 00:08:32.848 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.106 [2024-07-12 14:37:09.696264] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.106 [2024-07-12 14:37:09.770921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.106 [2024-07-12 14:37:09.830893] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.106 [2024-07-12 14:37:09.847086] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:08:33.106 INFO: Running with entropic power schedule (0xFF, 100). 00:08:33.106 INFO: Seed: 81669396 00:08:33.106 INFO: Loaded 1 modules (357813 inline 8-bit counters): 357813 [0x29ab10c, 0x2a026c1), 00:08:33.106 INFO: Loaded 1 PC tables (357813 PCs): 357813 [0x2a026c8,0x2f78218), 00:08:33.106 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:33.106 INFO: A corpus is not provided, starting from an empty corpus 00:08:33.106 #2 INITED exec/s: 0 rss: 65Mb 00:08:33.106 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:33.106 This may also happen if the target rejected all inputs we tried so far 00:08:33.364 [2024-07-12 14:37:09.912365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.364 [2024-07-12 14:37:09.912397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.623 NEW_FUNC[1/697]: 0x4af920 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:08:33.623 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:33.623 #9 NEW cov: 11975 ft: 11976 corp: 2/36b lim: 100 exec/s: 0 rss: 72Mb L: 35/35 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:33.623 [2024-07-12 14:37:10.263306] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.623 [2024-07-12 14:37:10.263369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.623 #10 NEW cov: 12105 ft: 12682 corp: 3/66b lim: 100 exec/s: 0 rss: 72Mb L: 30/35 MS: 1 EraseBytes- 00:08:33.623 [2024-07-12 14:37:10.323295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.623 [2024-07-12 14:37:10.323327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.623 #11 NEW cov: 12111 ft: 12968 corp: 4/101b lim: 100 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:08:33.623 [2024-07-12 14:37:10.363362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.623 [2024-07-12 14:37:10.363390] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.623 #14 NEW cov: 12196 ft: 13296 corp: 5/138b lim: 100 exec/s: 0 rss: 72Mb L: 37/37 MS: 3 ChangeBit-CrossOver-InsertRepeatedBytes- 00:08:33.623 [2024-07-12 14:37:10.403505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9114861794777529982 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.623 [2024-07-12 14:37:10.403539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.881 #15 NEW cov: 12196 ft: 13405 corp: 6/173b lim: 100 exec/s: 0 rss: 72Mb L: 35/37 MS: 1 ChangeBinInt- 00:08:33.881 [2024-07-12 14:37:10.443688] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9087840179833437822 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.881 [2024-07-12 14:37:10.443715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.881 #16 NEW cov: 12196 ft: 13460 corp: 7/203b lim: 100 exec/s: 0 rss: 72Mb L: 30/37 MS: 1 ChangeBinInt- 00:08:33.881 [2024-07-12 14:37:10.493733] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9087840179833437822 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.881 [2024-07-12 14:37:10.493760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.881 #17 NEW cov: 12196 ft: 13497 corp: 8/233b lim: 100 exec/s: 0 rss: 72Mb L: 30/37 MS: 1 ShuffleBytes- 00:08:33.881 [2024-07-12 14:37:10.543900] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9087840179833437822 len:32272 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.881 [2024-07-12 14:37:10.543927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.881 #18 NEW cov: 12196 ft: 13613 corp: 9/264b lim: 100 exec/s: 0 rss: 72Mb L: 31/37 MS: 1 InsertByte- 00:08:33.881 [2024-07-12 14:37:10.594173] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9087840179833437822 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.881 [2024-07-12 14:37:10.594200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.881 [2024-07-12 14:37:10.594265] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:7957419012188434030 len:28271 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.881 [2024-07-12 14:37:10.594282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.881 #19 NEW cov: 12196 ft: 14437 corp: 10/316b lim: 100 exec/s: 0 rss: 72Mb L: 52/52 MS: 1 InsertRepeatedBytes- 00:08:33.881 [2024-07-12 14:37:10.634130] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.881 [2024-07-12 14:37:10.634157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.139 #20 NEW cov: 12196 ft: 14503 corp: 11/351b lim: 100 exec/s: 0 rss: 72Mb L: 35/52 MS: 
1 ShuffleBytes- 00:08:34.139 [2024-07-12 14:37:10.684725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9114861794777529982 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.139 [2024-07-12 14:37:10.684752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.139 [2024-07-12 14:37:10.684796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.139 [2024-07-12 14:37:10.684813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.139 [2024-07-12 14:37:10.684866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.139 [2024-07-12 14:37:10.684882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.139 [2024-07-12 14:37:10.684935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.139 [2024-07-12 14:37:10.684951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.139 #21 NEW cov: 12196 ft: 14948 corp: 12/450b lim: 100 exec/s: 0 rss: 72Mb L: 99/99 MS: 1 InsertRepeatedBytes- 00:08:34.139 [2024-07-12 14:37:10.734413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.139 [2024-07-12 14:37:10.734440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.139 #27 NEW cov: 12196 ft: 14962 corp: 13/485b lim: 100 exec/s: 0 rss: 72Mb L: 35/99 MS: 1 ShuffleBytes- 00:08:34.139 [2024-07-12 14:37:10.774555] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9087840179833437822 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.139 [2024-07-12 14:37:10.774583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.139 NEW_FUNC[1/1]: 0x1a7e0d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:34.139 #28 NEW cov: 12219 ft: 15056 corp: 14/505b lim: 100 exec/s: 0 rss: 73Mb L: 20/99 MS: 1 EraseBytes- 00:08:34.139 [2024-07-12 14:37:10.814646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.139 [2024-07-12 14:37:10.814674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.139 #29 NEW cov: 12219 ft: 15076 corp: 15/533b lim: 100 exec/s: 0 rss: 73Mb L: 28/99 MS: 1 EraseBytes- 00:08:34.139 [2024-07-12 14:37:10.854761] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9087840179833437822 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.139 [2024-07-12 14:37:10.854790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.139 #30 NEW cov: 12219 ft: 15100 corp: 16/564b lim: 100 exec/s: 0 rss: 73Mb L: 31/99 MS: 1 InsertByte- 00:08:34.139 [2024-07-12 14:37:10.895456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9114861794777529982 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.139 [2024-07-12 14:37:10.895485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.139 [2024-07-12 14:37:10.895534] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.139 [2024-07-12 14:37:10.895549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.139 [2024-07-12 14:37:10.895601] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.139 [2024-07-12 14:37:10.895617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.139 [2024-07-12 14:37:10.895672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.139 [2024-07-12 14:37:10.895686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.139 [2024-07-12 14:37:10.895737] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.139 [2024-07-12 14:37:10.895751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:34.397 #31 NEW cov: 12219 ft: 15182 corp: 17/664b lim: 100 exec/s: 31 rss: 73Mb L: 100/100 MS: 1 CopyPart- 00:08:34.397 [2024-07-12 14:37:10.945015] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.397 [2024-07-12 14:37:10.945045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.397 [2024-07-12 14:37:10.985254] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.397 [2024-07-12 14:37:10.985283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.397 [2024-07-12 14:37:10.985332] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.397 [2024-07-12 14:37:10.985348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.397 #33 NEW cov: 12219 ft: 15215 corp: 18/705b lim: 100 exec/s: 33 rss: 73Mb L: 41/100 MS: 2 CopyPart-CMP- DE: "\012\000\000\000"- 00:08:34.397 [2024-07-12 14:37:11.025268] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.397 [2024-07-12 14:37:11.025297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.397 #34 NEW cov: 12219 ft: 15310 corp: 19/743b lim: 100 exec/s: 34 rss: 73Mb L: 38/100 MS: 1 InsertByte- 00:08:34.397 [2024-07-12 14:37:11.075422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.397 [2024-07-12 14:37:11.075452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.397 #35 NEW cov: 12219 ft: 15336 corp: 20/780b lim: 100 exec/s: 35 rss: 73Mb L: 37/100 MS: 1 ChangeByte- 00:08:34.397 [2024-07-12 14:37:11.116036] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9114861794777529982 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.397 [2024-07-12 14:37:11.116063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.397 [2024-07-12 14:37:11.116134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.397 [2024-07-12 14:37:11.116151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.397 [2024-07-12 14:37:11.116205] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.397 [2024-07-12 14:37:11.116221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.397 [2024-07-12 14:37:11.116274] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.397 [2024-07-12 14:37:11.116293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.397 [2024-07-12 14:37:11.116351] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.397 [2024-07-12 14:37:11.116367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:34.397 #36 NEW cov: 12219 ft: 15350 corp: 21/880b lim: 100 exec/s: 36 rss: 73Mb L: 100/100 MS: 1 ShuffleBytes- 00:08:34.397 [2024-07-12 14:37:11.165578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9087840179833437822 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.397 [2024-07-12 14:37:11.165606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.656 #37 NEW cov: 12219 ft: 15371 corp: 22/911b lim: 100 exec/s: 37 rss: 73Mb L: 31/100 MS: 1 CopyPart- 00:08:34.656 [2024-07-12 14:37:11.215902] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.656 [2024-07-12 14:37:11.215928] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.656 [2024-07-12 14:37:11.215965] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9082071600668245630 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.656 [2024-07-12 14:37:11.215981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.656 #38 NEW cov: 12219 ft: 15395 corp: 23/952b lim: 100 exec/s: 38 rss: 73Mb L: 41/100 MS: 1 CopyPart- 00:08:34.656 [2024-07-12 14:37:11.266202] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:11646767826930344353 len:41378 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.656 [2024-07-12 14:37:11.266229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.656 [2024-07-12 14:37:11.266275] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:11636877569514643873 len:7807 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.656 [2024-07-12 14:37:11.266291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.656 [2024-07-12 14:37:11.266345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:7957419012188434030 len:28271 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.656 [2024-07-12 14:37:11.266361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.656 #44 NEW cov: 12219 ft: 15668 corp: 24/1029b lim: 100 exec/s: 44 rss: 73Mb L: 77/100 MS: 1 InsertRepeatedBytes- 00:08:34.656 [2024-07-12 14:37:11.316016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9087840179833437822 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.656 [2024-07-12 14:37:11.316042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.656 #45 NEW cov: 12219 ft: 15679 corp: 25/1060b lim: 100 exec/s: 45 rss: 73Mb L: 31/100 MS: 1 ChangeByte- 00:08:34.656 [2024-07-12 14:37:11.366165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.656 [2024-07-12 14:37:11.366193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.656 #47 NEW cov: 12219 ft: 15698 corp: 26/1088b lim: 100 exec/s: 47 rss: 73Mb L: 28/100 MS: 2 InsertRepeatedBytes-InsertRepeatedBytes- 00:08:34.656 [2024-07-12 14:37:11.406419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.656 [2024-07-12 14:37:11.406449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.656 [2024-07-12 14:37:11.406500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9082071600668245630 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.656 [2024-07-12 14:37:11.406514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.656 #48 NEW cov: 12219 ft: 15728 corp: 27/1129b lim: 100 exec/s: 48 rss: 73Mb L: 41/100 MS: 1 ShuffleBytes- 00:08:34.914 [2024-07-12 14:37:11.456571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9087840179832389246 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.914 [2024-07-12 14:37:11.456597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.914 [2024-07-12 14:37:11.456638] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:7957419012188434030 len:28271 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.914 [2024-07-12 14:37:11.456654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.914 #49 NEW cov: 12219 ft: 15740 corp: 28/1181b lim: 100 exec/s: 49 rss: 73Mb L: 52/100 MS: 1 ChangeBit- 00:08:34.914 [2024-07-12 14:37:11.496662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9087840179832389246 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.914 [2024-07-12 14:37:11.496689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.914 [2024-07-12 14:37:11.496741] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:7957419012591087214 len:28271 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.914 [2024-07-12 14:37:11.496758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.914 #50 NEW cov: 12219 ft: 15743 corp: 29/1234b lim: 100 exec/s: 50 rss: 73Mb L: 53/100 MS: 1 InsertByte- 00:08:34.914 [2024-07-12 14:37:11.546938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:11646767826930344353 len:41378 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.914 [2024-07-12 14:37:11.546965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.914 [2024-07-12 14:37:11.547024] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:11636877569514643873 len:7807 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.914 [2024-07-12 14:37:11.547039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.914 [2024-07-12 14:37:11.547094] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:7957419012188434030 len:28271 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.914 [2024-07-12 14:37:11.547110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.914 #51 NEW cov: 12219 ft: 15760 corp: 30/1311b lim: 100 exec/s: 51 rss: 73Mb L: 77/100 MS: 1 ChangeBinInt- 00:08:34.914 [2024-07-12 14:37:11.596939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.914 [2024-07-12 14:37:11.596965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.914 
[2024-07-12 14:37:11.597037] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9114861777597660798 len:32387 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.914 [2024-07-12 14:37:11.597053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.914 #52 NEW cov: 12219 ft: 15790 corp: 31/1352b lim: 100 exec/s: 52 rss: 73Mb L: 41/100 MS: 1 ChangeBinInt- 00:08:34.914 [2024-07-12 14:37:11.636890] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.914 [2024-07-12 14:37:11.636917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.914 #53 NEW cov: 12219 ft: 15796 corp: 32/1389b lim: 100 exec/s: 53 rss: 74Mb L: 37/100 MS: 1 CMP- DE: "A\000"- 00:08:34.914 [2024-07-12 14:37:11.687294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7957419012456869486 len:28271 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.914 [2024-07-12 14:37:11.687320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.914 [2024-07-12 14:37:11.687361] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9114861777328172670 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.914 [2024-07-12 14:37:11.687376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.914 [2024-07-12 14:37:11.687430] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.915 [2024-07-12 14:37:11.687444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.173 #54 NEW cov: 12219 ft: 15797 corp: 33/1462b lim: 100 exec/s: 54 rss: 74Mb L: 73/100 MS: 1 CrossOver- 00:08:35.173 [2024-07-12 14:37:11.727291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9114861777597660796 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.173 [2024-07-12 14:37:11.727318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.173 [2024-07-12 14:37:11.727360] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.173 [2024-07-12 14:37:11.727376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.173 #55 NEW cov: 12219 ft: 15828 corp: 34/1503b lim: 100 exec/s: 55 rss: 74Mb L: 41/100 MS: 1 ChangeBit- 00:08:35.173 [2024-07-12 14:37:11.767220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9114861794777529982 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.173 [2024-07-12 14:37:11.767247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.173 #61 NEW cov: 12219 ft: 15847 corp: 35/1540b lim: 100 exec/s: 61 rss: 74Mb L: 37/100 MS: 1 PersAutoDict- DE: 
"A\000"- 00:08:35.173 [2024-07-12 14:37:11.807773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:72057589742960640 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.173 [2024-07-12 14:37:11.807800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.173 [2024-07-12 14:37:11.807846] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.173 [2024-07-12 14:37:11.807863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.173 [2024-07-12 14:37:11.807915] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.173 [2024-07-12 14:37:11.807931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.173 [2024-07-12 14:37:11.807985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.173 [2024-07-12 14:37:11.808001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.173 #62 NEW cov: 12219 ft: 15858 corp: 36/1631b lim: 100 exec/s: 62 rss: 74Mb L: 91/100 MS: 1 InsertRepeatedBytes- 00:08:35.173 [2024-07-12 14:37:11.857655] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9114861777597660798 len:32383 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.173 [2024-07-12 14:37:11.857681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.173 [2024-07-12 14:37:11.857717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9114861777597660798 len:32512 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.173 [2024-07-12 14:37:11.857732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.173 #63 NEW cov: 12219 ft: 15918 corp: 37/1673b lim: 100 exec/s: 63 rss: 74Mb L: 42/100 MS: 1 InsertRepeatedBytes- 00:08:35.173 [2024-07-12 14:37:11.898052] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9114861794777529982 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.173 [2024-07-12 14:37:11.898080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.173 [2024-07-12 14:37:11.898141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.173 [2024-07-12 14:37:11.898157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.174 [2024-07-12 14:37:11.898235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.174 [2024-07-12 14:37:11.898251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.174 [2024-07-12 14:37:11.898307] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:35.174 [2024-07-12 14:37:11.898323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.174 #64 pulse cov: 12219 ft: 15927 corp: 37/1673b lim: 100 exec/s: 32 rss: 74Mb 00:08:35.174 #64 NEW cov: 12219 ft: 15927 corp: 38/1772b lim: 100 exec/s: 32 rss: 74Mb L: 99/100 MS: 1 ChangeByte- 00:08:35.174 #64 DONE cov: 12219 ft: 15927 corp: 38/1772b lim: 100 exec/s: 32 rss: 74Mb 00:08:35.174 ###### Recommended dictionary. ###### 00:08:35.174 "\012\000\000\000" # Uses: 1 00:08:35.174 "A\000" # Uses: 1 00:08:35.174 ###### End of recommended dictionary. ###### 00:08:35.174 Done 64 runs in 2 second(s) 00:08:35.433 14:37:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:08:35.433 14:37:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:35.433 14:37:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:35.433 14:37:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:08:35.433 00:08:35.433 real 1m5.858s 00:08:35.433 user 1m41.195s 00:08:35.433 sys 0m7.944s 00:08:35.433 14:37:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.433 14:37:12 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:35.433 ************************************ 00:08:35.433 END TEST nvmf_llvm_fuzz 00:08:35.433 ************************************ 00:08:35.433 14:37:12 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0 00:08:35.433 14:37:12 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:35.433 14:37:12 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:35.433 14:37:12 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:35.433 14:37:12 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:35.433 14:37:12 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.433 14:37:12 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:35.433 ************************************ 00:08:35.433 START TEST vfio_llvm_fuzz 00:08:35.433 ************************************ 00:08:35.433 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:35.694 * Looking for test storage... 
00:08:35.694 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:35.694 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:35.694 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:35.694 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:35.694 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:08:35.694 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:35.694 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:35.694 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:35.694 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:35.694 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:35.694 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:35.694 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:35.694 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:35.694 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:35.695 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:35.695 #define SPDK_CONFIG_H 00:08:35.695 #define SPDK_CONFIG_APPS 1 00:08:35.695 #define SPDK_CONFIG_ARCH native 00:08:35.695 #undef SPDK_CONFIG_ASAN 00:08:35.695 #undef SPDK_CONFIG_AVAHI 00:08:35.695 #undef SPDK_CONFIG_CET 00:08:35.695 #define SPDK_CONFIG_COVERAGE 1 00:08:35.695 #define SPDK_CONFIG_CROSS_PREFIX 00:08:35.695 #undef SPDK_CONFIG_CRYPTO 00:08:35.695 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:35.695 #undef SPDK_CONFIG_CUSTOMOCF 00:08:35.695 #undef SPDK_CONFIG_DAOS 00:08:35.695 #define SPDK_CONFIG_DAOS_DIR 00:08:35.695 #define SPDK_CONFIG_DEBUG 1 00:08:35.695 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:35.695 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:35.695 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:35.695 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:35.695 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:35.695 #undef SPDK_CONFIG_DPDK_UADK 00:08:35.695 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:35.695 #define SPDK_CONFIG_EXAMPLES 1 00:08:35.695 #undef SPDK_CONFIG_FC 00:08:35.695 #define SPDK_CONFIG_FC_PATH 00:08:35.695 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:35.695 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:35.695 #undef SPDK_CONFIG_FUSE 00:08:35.695 #define SPDK_CONFIG_FUZZER 1 00:08:35.695 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:35.695 #undef SPDK_CONFIG_GOLANG 00:08:35.695 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:35.695 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:35.695 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:35.695 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:35.695 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:35.695 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:35.695 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:35.695 #define SPDK_CONFIG_IDXD 1 00:08:35.695 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:35.695 #undef SPDK_CONFIG_IPSEC_MB 00:08:35.695 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:35.695 #define SPDK_CONFIG_ISAL 1 00:08:35.695 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:08:35.695 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:35.695 #define SPDK_CONFIG_LIBDIR 00:08:35.695 #undef SPDK_CONFIG_LTO 00:08:35.695 #define SPDK_CONFIG_MAX_LCORES 128 00:08:35.695 #define SPDK_CONFIG_NVME_CUSE 1 00:08:35.695 #undef SPDK_CONFIG_OCF 00:08:35.695 #define SPDK_CONFIG_OCF_PATH 00:08:35.695 #define SPDK_CONFIG_OPENSSL_PATH 00:08:35.696 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:35.696 #define SPDK_CONFIG_PGO_DIR 00:08:35.696 #undef SPDK_CONFIG_PGO_USE 00:08:35.696 #define SPDK_CONFIG_PREFIX /usr/local 00:08:35.696 #undef SPDK_CONFIG_RAID5F 00:08:35.696 #undef SPDK_CONFIG_RBD 00:08:35.696 #define SPDK_CONFIG_RDMA 1 00:08:35.696 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:35.696 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:35.696 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:35.696 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:35.696 #undef SPDK_CONFIG_SHARED 00:08:35.696 #undef SPDK_CONFIG_SMA 00:08:35.696 #define SPDK_CONFIG_TESTS 1 00:08:35.696 #undef SPDK_CONFIG_TSAN 00:08:35.696 #define SPDK_CONFIG_UBLK 1 00:08:35.696 #define SPDK_CONFIG_UBSAN 1 00:08:35.696 #undef SPDK_CONFIG_UNIT_TESTS 00:08:35.696 #undef SPDK_CONFIG_URING 00:08:35.696 #define SPDK_CONFIG_URING_PATH 00:08:35.696 #undef SPDK_CONFIG_URING_ZNS 00:08:35.696 #undef SPDK_CONFIG_USDT 00:08:35.696 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:35.696 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:35.696 #define SPDK_CONFIG_VFIO_USER 1 00:08:35.696 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:35.696 #define SPDK_CONFIG_VHOST 1 00:08:35.696 #define SPDK_CONFIG_VIRTIO 1 00:08:35.696 #undef SPDK_CONFIG_VTUNE 00:08:35.696 #define SPDK_CONFIG_VTUNE_DIR 00:08:35.696 #define SPDK_CONFIG_WERROR 1 00:08:35.696 #define SPDK_CONFIG_WPDK_DIR 00:08:35.696 #undef SPDK_CONFIG_XNVME 00:08:35.696 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:35.696 
14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:35.696 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:35.697 14:37:12 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # : 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # cat 00:08:35.697 14:37:12 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:35.697 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:35.698 14:37:12 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j72 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 1432430 ]] 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 1432430 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1707 -- # set_test_storage 2147483648 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.nxjJWj 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.nxjJWj/tests/vfio /tmp/spdk.nxjJWj 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # 
uses["$mount"]=0 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=893108224 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4391321600 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=87337697280 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=94508576768 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=7170879488 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47198650368 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254286336 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=18895826944 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=18901716992 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5890048 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47253942272 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254290432 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=348160 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:35.698 14:37:12 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=9450852352 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450856448 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:35.698 * Looking for test storage... 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=87337697280 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=9385472000 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:35.698 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1709 -- # set -o errtrace 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1710 -- # shopt -s extdebug 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1713 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1714 -- # true 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1716 -- # xtrace_fd 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:35.698 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:35.699 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:35.699 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:08:35.699 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:08:35.699 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:08:35.699 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 
00:08:35.699 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:35.699 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:35.699 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:35.699 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:08:35.699 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:35.699 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:35.699 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:35.699 14:37:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:08:35.699 [2024-07-12 14:37:12.470242] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:08:35.699 [2024-07-12 14:37:12.470334] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1432583 ] 00:08:35.958 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.958 [2024-07-12 14:37:12.560802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.958 [2024-07-12 14:37:12.640216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.216 INFO: Running with entropic power schedule (0xFF, 100). 00:08:36.216 INFO: Seed: 3059678719 00:08:36.216 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296c90c, 0x29c33f5), 00:08:36.216 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c33f8,0x2f2e288), 00:08:36.216 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:36.216 INFO: A corpus is not provided, starting from an empty corpus 00:08:36.216 #2 INITED exec/s: 0 rss: 66Mb 00:08:36.216 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:36.216 This may also happen if the target rejected all inputs we tried so far 00:08:36.216 [2024-07-12 14:37:12.895368] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:08:36.730 NEW_FUNC[1/658]: 0x4838a0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:08:36.730 NEW_FUNC[2/658]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:36.730 #10 NEW cov: 10960 ft: 10753 corp: 2/7b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 3 ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:08:36.987 #11 NEW cov: 10974 ft: 14271 corp: 3/13b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:37.245 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:37.245 #17 NEW cov: 10991 ft: 14759 corp: 4/19b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 CopyPart- 00:08:37.245 #28 NEW cov: 10994 ft: 15949 corp: 5/25b lim: 6 exec/s: 28 rss: 74Mb L: 6/6 MS: 1 ChangeByte- 00:08:37.503 #31 NEW cov: 10994 ft: 17052 corp: 6/31b lim: 6 exec/s: 31 rss: 74Mb L: 6/6 MS: 3 EraseBytes-InsertByte-CrossOver- 00:08:37.761 #32 NEW cov: 10994 ft: 17362 corp: 7/37b lim: 6 exec/s: 32 rss: 74Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:38.049 #33 NEW cov: 10994 ft: 17496 corp: 8/43b lim: 6 exec/s: 33 rss: 74Mb L: 6/6 MS: 1 CopyPart- 00:08:38.049 #39 NEW cov: 11001 ft: 17711 corp: 9/49b lim: 6 exec/s: 39 rss: 74Mb L: 6/6 MS: 1 ChangeBit- 00:08:38.307 #47 NEW cov: 11001 ft: 17878 corp: 10/55b lim: 6 exec/s: 23 rss: 75Mb L: 6/6 MS: 3 CrossOver-InsertRepeatedBytes-InsertByte- 00:08:38.307 #47 DONE cov: 11001 ft: 17878 corp: 10/55b lim: 6 exec/s: 23 rss: 75Mb 00:08:38.307 Done 47 runs in 2 second(s) 00:08:38.307 [2024-07-12 14:37:14.993722] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:08:38.566 14:37:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:08:38.566 14:37:15 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:38.566 14:37:15 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:38.566 14:37:15 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:38.566 14:37:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:08:38.566 14:37:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:38.566 14:37:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:38.566 14:37:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:38.566 14:37:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:08:38.566 14:37:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:08:38.566 14:37:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:08:38.566 14:37:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:08:38.566 14:37:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:38.566 14:37:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:38.566 14:37:15 
llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:38.566 14:37:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:08:38.566 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:38.566 14:37:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:38.566 14:37:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:38.566 14:37:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:08:38.566 [2024-07-12 14:37:15.296934] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:08:38.566 [2024-07-12 14:37:15.297023] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1432990 ] 00:08:38.566 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.825 [2024-07-12 14:37:15.385548] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.825 [2024-07-12 14:37:15.468581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.083 INFO: Running with entropic power schedule (0xFF, 100). 00:08:39.083 INFO: Seed: 1596712870 00:08:39.083 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296c90c, 0x29c33f5), 00:08:39.083 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c33f8,0x2f2e288), 00:08:39.083 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:39.083 INFO: A corpus is not provided, starting from an empty corpus 00:08:39.083 #2 INITED exec/s: 0 rss: 66Mb 00:08:39.083 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:39.083 This may also happen if the target rejected all inputs we tried so far 00:08:39.083 [2024-07-12 14:37:15.731053] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:08:39.083 [2024-07-12 14:37:15.808367] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:39.083 [2024-07-12 14:37:15.808393] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:39.083 [2024-07-12 14:37:15.808428] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:39.599 NEW_FUNC[1/660]: 0x483e40 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:08:39.599 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:39.599 #32 NEW cov: 10956 ft: 10830 corp: 2/5b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 5 CrossOver-CopyPart-ChangeBinInt-InsertByte-CrossOver- 00:08:39.599 [2024-07-12 14:37:16.313650] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:39.599 [2024-07-12 14:37:16.313688] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:39.599 [2024-07-12 14:37:16.313722] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:39.858 #38 NEW cov: 10970 ft: 14029 corp: 3/9b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 ChangeBinInt- 00:08:39.858 [2024-07-12 14:37:16.513541] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:39.858 [2024-07-12 14:37:16.513567] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:39.858 [2024-07-12 14:37:16.513587] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:39.858 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:39.858 #41 NEW cov: 10990 ft: 14440 corp: 4/13b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 3 ChangeByte-CrossOver-CopyPart- 00:08:40.115 [2024-07-12 14:37:16.723667] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:40.115 [2024-07-12 14:37:16.723693] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:40.115 [2024-07-12 14:37:16.723711] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:40.115 #57 NEW cov: 10990 ft: 15836 corp: 5/17b lim: 4 exec/s: 57 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:08:40.373 [2024-07-12 14:37:16.913380] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:40.373 [2024-07-12 14:37:16.913403] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:40.373 [2024-07-12 14:37:16.913421] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:40.374 #58 NEW cov: 10990 ft: 16656 corp: 6/21b lim: 4 exec/s: 58 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:08:40.374 [2024-07-12 14:37:17.112022] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:40.374 [2024-07-12 14:37:17.112044] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:40.374 [2024-07-12 14:37:17.112078] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:40.632 #64 
NEW cov: 10990 ft: 16860 corp: 7/25b lim: 4 exec/s: 64 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:08:40.632 [2024-07-12 14:37:17.311206] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:40.632 [2024-07-12 14:37:17.311229] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:40.632 [2024-07-12 14:37:17.311247] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:40.889 #67 NEW cov: 10990 ft: 17107 corp: 8/29b lim: 4 exec/s: 67 rss: 74Mb L: 4/4 MS: 3 InsertByte-CrossOver-CrossOver- 00:08:40.889 [2024-07-12 14:37:17.509004] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:40.889 [2024-07-12 14:37:17.509029] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:40.890 [2024-07-12 14:37:17.509050] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:40.890 #73 NEW cov: 10997 ft: 17288 corp: 9/33b lim: 4 exec/s: 73 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:41.147 [2024-07-12 14:37:17.715797] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:41.147 [2024-07-12 14:37:17.715824] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:41.147 [2024-07-12 14:37:17.715858] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:41.147 #74 NEW cov: 10997 ft: 18581 corp: 10/37b lim: 4 exec/s: 37 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:08:41.147 #74 DONE cov: 10997 ft: 18581 corp: 10/37b lim: 4 exec/s: 37 rss: 74Mb 00:08:41.147 Done 74 runs in 2 second(s) 00:08:41.147 [2024-07-12 14:37:17.852738] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:08:41.405 14:37:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:08:41.405 14:37:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:41.405 14:37:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:41.405 14:37:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:41.405 14:37:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:08:41.405 14:37:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:41.405 14:37:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:41.405 14:37:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:41.405 14:37:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:08:41.405 14:37:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:08:41.405 14:37:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:08:41.405 14:37:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:08:41.405 14:37:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:41.405 14:37:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:41.405 14:37:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:41.405 14:37:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:08:41.405 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:41.405 14:37:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:41.405 14:37:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:41.405 14:37:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:08:41.405 [2024-07-12 14:37:18.163792] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:08:41.405 [2024-07-12 14:37:18.163857] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1433350 ] 00:08:41.662 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.662 [2024-07-12 14:37:18.234562] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.662 [2024-07-12 14:37:18.318630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.918 INFO: Running with entropic power schedule (0xFF, 100). 00:08:41.918 INFO: Seed: 148754367 00:08:41.918 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296c90c, 0x29c33f5), 00:08:41.918 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c33f8,0x2f2e288), 00:08:41.918 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:41.918 INFO: A corpus is not provided, starting from an empty corpus 00:08:41.918 #2 INITED exec/s: 0 rss: 66Mb 00:08:41.918 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:41.918 This may also happen if the target rejected all inputs we tried so far 00:08:41.918 [2024-07-12 14:37:18.576045] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:08:41.918 [2024-07-12 14:37:18.656937] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:42.432 NEW_FUNC[1/659]: 0x484820 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:08:42.432 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:42.432 #3 NEW cov: 10939 ft: 10545 corp: 2/9b lim: 8 exec/s: 0 rss: 72Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:08:42.432 [2024-07-12 14:37:19.163608] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:42.689 #9 NEW cov: 10956 ft: 13998 corp: 3/17b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 CrossOver- 00:08:42.689 [2024-07-12 14:37:19.355170] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:42.689 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:42.689 #10 NEW cov: 10973 ft: 15667 corp: 4/25b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 ChangeBinInt- 00:08:42.946 [2024-07-12 14:37:19.559218] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:42.946 #21 NEW cov: 10973 ft: 15961 corp: 5/33b lim: 8 exec/s: 21 rss: 74Mb L: 8/8 MS: 1 CopyPart- 00:08:43.202 [2024-07-12 14:37:19.752367] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:43.202 #22 NEW cov: 10973 ft: 16371 corp: 6/41b lim: 8 exec/s: 22 rss: 74Mb L: 8/8 MS: 1 CopyPart- 00:08:43.202 [2024-07-12 14:37:19.941126] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:43.457 #23 NEW cov: 10973 ft: 16733 corp: 7/49b lim: 8 exec/s: 23 rss: 74Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:43.457 [2024-07-12 14:37:20.129774] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:43.457 #24 NEW cov: 10973 ft: 17152 corp: 8/57b lim: 8 exec/s: 24 rss: 74Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:43.714 [2024-07-12 14:37:20.320863] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:43.714 #25 NEW cov: 10980 ft: 17324 corp: 9/65b lim: 8 exec/s: 25 rss: 74Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:43.971 [2024-07-12 14:37:20.520803] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:43.971 #26 NEW cov: 10980 ft: 17450 corp: 10/73b lim: 8 exec/s: 13 rss: 74Mb L: 8/8 MS: 1 CrossOver- 00:08:43.971 #26 DONE cov: 10980 ft: 17450 corp: 10/73b lim: 8 exec/s: 13 rss: 74Mb 00:08:43.971 Done 26 runs in 2 second(s) 00:08:43.971 [2024-07-12 14:37:20.657725] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:08:44.229 14:37:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:08:44.229 14:37:20 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:44.229 14:37:20 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:44.229 14:37:20 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:44.229 14:37:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # 
local fuzzer_type=3 00:08:44.229 14:37:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:44.229 14:37:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:44.229 14:37:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:44.229 14:37:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:08:44.229 14:37:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:08:44.229 14:37:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:08:44.229 14:37:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:08:44.229 14:37:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:44.229 14:37:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:44.229 14:37:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:44.229 14:37:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:08:44.229 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:44.229 14:37:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:44.229 14:37:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:44.229 14:37:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:08:44.229 [2024-07-12 14:37:20.971467] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:08:44.229 [2024-07-12 14:37:20.971550] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1433713 ] 00:08:44.229 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.487 [2024-07-12 14:37:21.062941] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.487 [2024-07-12 14:37:21.145651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.746 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:44.746 INFO: Seed: 2979746694 00:08:44.746 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296c90c, 0x29c33f5), 00:08:44.746 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c33f8,0x2f2e288), 00:08:44.746 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:44.746 INFO: A corpus is not provided, starting from an empty corpus 00:08:44.746 #2 INITED exec/s: 0 rss: 66Mb 00:08:44.746 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:44.746 This may also happen if the target rejected all inputs we tried so far 00:08:44.746 [2024-07-12 14:37:21.412944] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:08:45.262 NEW_FUNC[1/659]: 0x484f00 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:08:45.262 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:45.262 #5 NEW cov: 10946 ft: 10767 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 3 ChangeByte-ChangeBit-InsertRepeatedBytes- 00:08:45.519 #6 NEW cov: 10960 ft: 14109 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 CrossOver- 00:08:45.519 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:45.519 #12 NEW cov: 10977 ft: 15432 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:45.777 #28 NEW cov: 10977 ft: 16020 corp: 5/129b lim: 32 exec/s: 28 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:46.033 #29 NEW cov: 10977 ft: 16620 corp: 6/161b lim: 32 exec/s: 29 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:08:46.033 #30 NEW cov: 10977 ft: 16676 corp: 7/193b lim: 32 exec/s: 30 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:08:46.291 #41 NEW cov: 10977 ft: 16847 corp: 8/225b lim: 32 exec/s: 41 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:46.548 #42 NEW cov: 10984 ft: 16905 corp: 9/257b lim: 32 exec/s: 42 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:46.806 #44 NEW cov: 10984 ft: 17112 corp: 10/289b lim: 32 exec/s: 44 rss: 74Mb L: 32/32 MS: 2 EraseBytes-CopyPart- 00:08:46.806 #45 NEW cov: 10984 ft: 17606 corp: 11/321b lim: 32 exec/s: 22 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:08:46.806 #45 DONE cov: 10984 ft: 17606 corp: 11/321b lim: 32 exec/s: 22 rss: 74Mb 00:08:46.806 Done 45 runs in 2 second(s) 00:08:46.806 [2024-07-12 14:37:23.559741] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:08:47.064 14:37:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:08:47.064 14:37:23 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:47.064 14:37:23 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:47.064 14:37:23 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:47.064 14:37:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:08:47.064 14:37:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:47.064 14:37:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:47.064 14:37:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:47.064 14:37:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:08:47.064 14:37:23 
llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:08:47.064 14:37:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:08:47.064 14:37:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:08:47.064 14:37:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:47.064 14:37:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:47.064 14:37:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:47.064 14:37:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:08:47.064 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:47.064 14:37:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:47.064 14:37:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:47.064 14:37:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:08:47.323 [2024-07-12 14:37:23.876762] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:08:47.323 [2024-07-12 14:37:23.876835] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1434076 ] 00:08:47.323 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.323 [2024-07-12 14:37:23.966698] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.323 [2024-07-12 14:37:24.049737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.580 INFO: Running with entropic power schedule (0xFF, 100). 00:08:47.580 INFO: Seed: 1583808161 00:08:47.580 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296c90c, 0x29c33f5), 00:08:47.580 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c33f8,0x2f2e288), 00:08:47.580 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:47.580 INFO: A corpus is not provided, starting from an empty corpus 00:08:47.580 #2 INITED exec/s: 0 rss: 66Mb 00:08:47.580 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:47.580 This may also happen if the target rejected all inputs we tried so far 00:08:47.580 [2024-07-12 14:37:24.308274] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:08:47.838 [2024-07-12 14:37:24.391643] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0xa, 0xa) fd=323 offset=0x8a00000000000000 prot=0x3: Invalid argument 00:08:47.838 [2024-07-12 14:37:24.391670] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xa, 0xa) offset=0x8a00000000000000 flags=0x3: Invalid argument 00:08:47.838 [2024-07-12 14:37:24.391681] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:47.838 [2024-07-12 14:37:24.391701] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:47.838 [2024-07-12 14:37:24.392639] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xa, 0xa) flags=0: No such file or directory 00:08:47.838 [2024-07-12 14:37:24.392659] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:47.838 [2024-07-12 14:37:24.392676] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:48.095 NEW_FUNC[1/660]: 0x485780 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:08:48.095 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:48.095 #122 NEW cov: 10959 ft: 10929 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 5 ChangeBit-InsertByte-InsertRepeatedBytes-CrossOver-CopyPart- 00:08:48.353 #123 NEW cov: 10979 ft: 13872 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeBit- 00:08:48.353 [2024-07-12 14:37:25.097076] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0xa, 0x14) fd=325 offset=0x8a00000000000000 prot=0x3: Value too large for defined data type 00:08:48.353 [2024-07-12 14:37:25.097119] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xa, 0x14) offset=0x8a00000000000000 flags=0x3: Value too large for defined data type 00:08:48.353 [2024-07-12 14:37:25.097130] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Value too large for defined data type 00:08:48.353 [2024-07-12 14:37:25.097146] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:48.353 [2024-07-12 14:37:25.098089] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xa, 0x14) flags=0: No such file or directory 00:08:48.353 [2024-07-12 14:37:25.098108] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:48.353 [2024-07-12 14:37:25.098125] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:48.612 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:48.612 #124 NEW cov: 10996 ft: 14933 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:08:48.612 [2024-07-12 14:37:25.302146] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0xa, 0x14) fd=325 offset=0x8a000000000a0000 prot=0x3: Value 
too large for defined data type 00:08:48.612 [2024-07-12 14:37:25.302169] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xa, 0x14) offset=0x8a000000000a0000 flags=0x3: Value too large for defined data type 00:08:48.612 [2024-07-12 14:37:25.302180] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Value too large for defined data type 00:08:48.612 [2024-07-12 14:37:25.302197] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:48.612 [2024-07-12 14:37:25.303147] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xa, 0x14) flags=0: No such file or directory 00:08:48.612 [2024-07-12 14:37:25.303167] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:48.612 [2024-07-12 14:37:25.303182] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:48.869 #130 NEW cov: 10996 ft: 15560 corp: 5/129b lim: 32 exec/s: 130 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:08:48.869 #136 NEW cov: 10996 ft: 15964 corp: 6/161b lim: 32 exec/s: 136 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:49.127 [2024-07-12 14:37:25.704342] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0xa, 0xa000a) fd=325 offset=0x8a00000000000000 prot=0x3: Value too large for defined data type 00:08:49.127 [2024-07-12 14:37:25.704368] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xa, 0xa000a) offset=0x8a00000000000000 flags=0x3: Value too large for defined data type 00:08:49.127 [2024-07-12 14:37:25.704379] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Value too large for defined data type 00:08:49.127 [2024-07-12 14:37:25.704397] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:49.127 [2024-07-12 14:37:25.705340] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xa, 0xa000a) flags=0: No such file or directory 00:08:49.127 [2024-07-12 14:37:25.705361] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:49.127 [2024-07-12 14:37:25.705378] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:49.127 #137 NEW cov: 10996 ft: 16156 corp: 7/193b lim: 32 exec/s: 137 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:08:49.127 [2024-07-12 14:37:25.910220] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0x400000000000000a, 0x4000000000000014) fd=325 offset=0x8a000000000a0000 prot=0x3: Value too large for defined data type 00:08:49.127 [2024-07-12 14:37:25.910245] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x400000000000000a, 0x4000000000000014) offset=0x8a000000000a0000 flags=0x3: Value too large for defined data type 00:08:49.127 [2024-07-12 14:37:25.910260] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Value too large for defined data type 00:08:49.127 [2024-07-12 14:37:25.910276] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:49.127 [2024-07-12 14:37:25.911266] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x400000000000000a, 0x4000000000000014) flags=0: No such file or directory 00:08:49.127 [2024-07-12 
14:37:25.911287] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:49.127 [2024-07-12 14:37:25.911305] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:49.384 #138 NEW cov: 10996 ft: 16639 corp: 8/225b lim: 32 exec/s: 138 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:08:49.384 [2024-07-12 14:37:26.104457] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0xa, 0xa0000000a) fd=325 offset=0x8a00000000000000 prot=0x3: Value too large for defined data type 00:08:49.384 [2024-07-12 14:37:26.104481] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xa, 0xa0000000a) offset=0x8a00000000000000 flags=0x3: Value too large for defined data type 00:08:49.384 [2024-07-12 14:37:26.104491] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Value too large for defined data type 00:08:49.384 [2024-07-12 14:37:26.104507] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:49.384 [2024-07-12 14:37:26.105476] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xa, 0xa0000000a) flags=0: No such file or directory 00:08:49.384 [2024-07-12 14:37:26.105495] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:49.384 [2024-07-12 14:37:26.105511] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:49.642 #139 NEW cov: 11003 ft: 16704 corp: 9/257b lim: 32 exec/s: 139 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:08:49.642 [2024-07-12 14:37:26.297930] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0xa, 0xa0000000a) fd=325 offset=0x8a00000000000000 prot=0x3: Value too large for defined data type 00:08:49.642 [2024-07-12 14:37:26.297953] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xa, 0xa0000000a) offset=0x8a00000000000000 flags=0x3: Value too large for defined data type 00:08:49.642 [2024-07-12 14:37:26.297963] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Value too large for defined data type 00:08:49.642 [2024-07-12 14:37:26.297995] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:49.642 [2024-07-12 14:37:26.298952] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xa, 0xa0000000a) flags=0: No such file or directory 00:08:49.642 [2024-07-12 14:37:26.298971] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:49.642 [2024-07-12 14:37:26.298986] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:49.642 #145 NEW cov: 11003 ft: 16790 corp: 10/289b lim: 32 exec/s: 72 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:49.642 #145 DONE cov: 11003 ft: 16790 corp: 10/289b lim: 32 exec/s: 72 rss: 74Mb 00:08:49.642 Done 145 runs in 2 second(s) 00:08:49.900 [2024-07-12 14:37:26.434750] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:08:50.158 14:37:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:08:50.158 14:37:26 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:50.158 14:37:26 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 
00:08:50.158 14:37:26 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:50.158 14:37:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:08:50.158 14:37:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:50.158 14:37:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:50.158 14:37:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:50.158 14:37:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:08:50.158 14:37:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:08:50.158 14:37:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:08:50.158 14:37:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:08:50.158 14:37:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:50.158 14:37:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:50.158 14:37:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:50.158 14:37:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:08:50.158 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:50.158 14:37:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:50.158 14:37:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:50.159 14:37:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:08:50.159 [2024-07-12 14:37:26.755827] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:08:50.159 [2024-07-12 14:37:26.755912] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1434436 ] 00:08:50.159 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.159 [2024-07-12 14:37:26.846780] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.159 [2024-07-12 14:37:26.930592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.417 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:50.417 INFO: Seed: 157812232 00:08:50.417 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296c90c, 0x29c33f5), 00:08:50.417 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c33f8,0x2f2e288), 00:08:50.417 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:50.417 INFO: A corpus is not provided, starting from an empty corpus 00:08:50.417 #2 INITED exec/s: 0 rss: 66Mb 00:08:50.417 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:50.417 This may also happen if the target rejected all inputs we tried so far 00:08:50.417 [2024-07-12 14:37:27.174383] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:08:50.675 [2024-07-12 14:37:27.247264] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:50.675 [2024-07-12 14:37:27.247305] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:50.933 NEW_FUNC[1/660]: 0x486180 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:08:50.933 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:50.933 #64 NEW cov: 10957 ft: 10786 corp: 2/14b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 2 InsertRepeatedBytes-CopyPart- 00:08:51.190 [2024-07-12 14:37:27.756185] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:51.190 [2024-07-12 14:37:27.756242] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:51.190 #79 NEW cov: 10975 ft: 14194 corp: 3/27b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 5 ChangeBit-ShuffleBytes-ChangeByte-InsertRepeatedBytes-CrossOver- 00:08:51.190 [2024-07-12 14:37:27.960366] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:51.190 [2024-07-12 14:37:27.960397] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:51.447 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:51.447 #85 NEW cov: 10992 ft: 15829 corp: 4/40b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:51.447 [2024-07-12 14:37:28.172934] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:51.447 [2024-07-12 14:37:28.172966] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:51.704 #88 NEW cov: 10992 ft: 16516 corp: 5/53b lim: 13 exec/s: 88 rss: 74Mb L: 13/13 MS: 3 CrossOver-InsertByte-InsertRepeatedBytes- 00:08:51.704 [2024-07-12 14:37:28.364230] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:51.704 [2024-07-12 14:37:28.364261] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:51.704 #89 NEW cov: 10992 ft: 17285 corp: 6/66b lim: 13 exec/s: 89 rss: 74Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:51.961 [2024-07-12 14:37:28.560063] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:51.961 [2024-07-12 14:37:28.560095] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:51.961 #110 NEW cov: 10992 ft: 17455 corp: 7/79b lim: 13 exec/s: 110 rss: 74Mb L: 13/13 MS: 1 CopyPart- 00:08:52.219 [2024-07-12 14:37:28.756765] 
vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:52.219 [2024-07-12 14:37:28.756799] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:52.219 #111 NEW cov: 10992 ft: 17605 corp: 8/92b lim: 13 exec/s: 111 rss: 74Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:52.219 [2024-07-12 14:37:28.954307] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:52.219 [2024-07-12 14:37:28.954339] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:52.476 #112 NEW cov: 10999 ft: 17772 corp: 9/105b lim: 13 exec/s: 112 rss: 74Mb L: 13/13 MS: 1 CrossOver- 00:08:52.476 [2024-07-12 14:37:29.152505] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:52.476 [2024-07-12 14:37:29.152545] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:52.734 #113 NEW cov: 10999 ft: 17808 corp: 10/118b lim: 13 exec/s: 56 rss: 75Mb L: 13/13 MS: 1 CopyPart- 00:08:52.734 #113 DONE cov: 10999 ft: 17808 corp: 10/118b lim: 13 exec/s: 56 rss: 75Mb 00:08:52.734 Done 113 runs in 2 second(s) 00:08:52.734 [2024-07-12 14:37:29.290732] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:08:52.993 14:37:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:08:52.993 14:37:29 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:52.993 14:37:29 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:52.993 14:37:29 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:52.993 14:37:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:08:52.993 14:37:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:52.993 14:37:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:52.993 14:37:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:52.993 14:37:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:08:52.993 14:37:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:08:52.993 14:37:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:08:52.993 14:37:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:08:52.993 14:37:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:52.993 14:37:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:52.993 14:37:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:52.993 14:37:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:08:52.993 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:52.993 14:37:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:52.993 14:37:29 llvm_fuzz.vfio_llvm_fuzz -- 
vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:52.993 14:37:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:08:52.993 [2024-07-12 14:37:29.607239] Starting SPDK v24.09-pre git sha1 2a2ade677 / DPDK 24.03.0 initialization... 00:08:52.993 [2024-07-12 14:37:29.607321] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1434797 ] 00:08:52.993 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.993 [2024-07-12 14:37:29.700196] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.251 [2024-07-12 14:37:29.781412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.251 INFO: Running with entropic power schedule (0xFF, 100). 00:08:53.251 INFO: Seed: 3009835009 00:08:53.251 INFO: Loaded 1 modules (355049 inline 8-bit counters): 355049 [0x296c90c, 0x29c33f5), 00:08:53.251 INFO: Loaded 1 PC tables (355049 PCs): 355049 [0x29c33f8,0x2f2e288), 00:08:53.251 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:53.251 INFO: A corpus is not provided, starting from an empty corpus 00:08:53.251 #2 INITED exec/s: 0 rss: 67Mb 00:08:53.251 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:53.251 This may also happen if the target rejected all inputs we tried so far 00:08:53.251 [2024-07-12 14:37:30.026785] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:08:53.510 [2024-07-12 14:37:30.111637] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:53.510 [2024-07-12 14:37:30.111685] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:53.768 NEW_FUNC[1/658]: 0x486e70 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:08:53.768 NEW_FUNC[2/658]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:53.768 #16 NEW cov: 10916 ft: 10925 corp: 2/10b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 4 ChangeByte-InsertRepeatedBytes-CrossOver-CopyPart- 00:08:54.046 [2024-07-12 14:37:30.624992] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:54.046 [2024-07-12 14:37:30.625041] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:54.046 NEW_FUNC[1/2]: 0x1412c50 in handle_cmd_req /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:5564 00:08:54.046 NEW_FUNC[2/2]: 0x143ce80 in handle_sq_tdbl_write /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:2551 00:08:54.046 #27 NEW cov: 10967 ft: 14063 corp: 3/19b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeByte- 00:08:54.320 [2024-07-12 14:37:30.831559] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:54.320 [2024-07-12 14:37:30.831596] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:54.320 NEW_FUNC[1/1]: 0x1a4a600 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:54.320 #28 NEW cov: 10984 ft: 14822 corp: 4/28b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes- 00:08:54.320 [2024-07-12 14:37:31.031223] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:54.320 [2024-07-12 14:37:31.031258] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:54.577 #29 NEW cov: 10984 ft: 15759 corp: 5/37b lim: 9 exec/s: 29 rss: 75Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:54.577 [2024-07-12 14:37:31.233680] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:54.577 [2024-07-12 14:37:31.233716] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:54.577 #35 NEW cov: 10984 ft: 16210 corp: 6/46b lim: 9 exec/s: 35 rss: 75Mb L: 9/9 MS: 1 ShuffleBytes- 00:08:54.835 [2024-07-12 14:37:31.430869] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:54.835 [2024-07-12 14:37:31.430904] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:54.835 #38 NEW cov: 10984 ft: 16731 corp: 7/55b lim: 9 exec/s: 38 rss: 75Mb L: 9/9 MS: 3 EraseBytes-ChangeBit-InsertByte- 00:08:55.092 [2024-07-12 14:37:31.627094] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:55.092 [2024-07-12 14:37:31.627125] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:55.092 #40 NEW cov: 10984 ft: 16838 corp: 8/64b lim: 9 exec/s: 40 rss: 75Mb L: 9/9 MS: 2 EraseBytes-CrossOver- 
00:08:55.092 [2024-07-12 14:37:31.827270] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:55.092 [2024-07-12 14:37:31.827303] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:55.349 #46 NEW cov: 10991 ft: 17007 corp: 9/73b lim: 9 exec/s: 46 rss: 75Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:55.349 [2024-07-12 14:37:32.028279] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:55.349 [2024-07-12 14:37:32.028309] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:55.608 #47 NEW cov: 10991 ft: 17030 corp: 10/82b lim: 9 exec/s: 23 rss: 75Mb L: 9/9 MS: 1 ChangeBit- 00:08:55.608 #47 DONE cov: 10991 ft: 17030 corp: 10/82b lim: 9 exec/s: 23 rss: 75Mb 00:08:55.608 Done 47 runs in 2 second(s) 00:08:55.608 [2024-07-12 14:37:32.167727] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:08:55.867 14:37:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:08:55.867 14:37:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:55.867 14:37:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:55.867 14:37:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:08:55.867 00:08:55.867 real 0m20.307s 00:08:55.867 user 0m28.374s 00:08:55.867 sys 0m2.080s 00:08:55.867 14:37:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.867 14:37:32 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:55.867 ************************************ 00:08:55.867 END TEST vfio_llvm_fuzz 00:08:55.867 ************************************ 00:08:55.867 14:37:32 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0 00:08:55.867 14:37:32 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:08:55.867 00:08:55.867 real 1m26.454s 00:08:55.867 user 2m9.674s 00:08:55.867 sys 0m10.230s 00:08:55.867 14:37:32 llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.867 14:37:32 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:55.867 ************************************ 00:08:55.867 END TEST llvm_fuzz 00:08:55.867 ************************************ 00:08:55.867 14:37:32 -- common/autotest_common.sh@1142 -- # return 0 00:08:55.867 14:37:32 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:08:55.867 14:37:32 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:08:55.867 14:37:32 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:08:55.867 14:37:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:55.867 14:37:32 -- common/autotest_common.sh@10 -- # set +x 00:08:55.867 14:37:32 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:08:55.867 14:37:32 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:08:55.867 14:37:32 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:08:55.867 14:37:32 -- common/autotest_common.sh@10 -- # set +x 00:09:01.139 INFO: APP EXITING 00:09:01.139 INFO: killing all VMs 00:09:01.139 INFO: killing vhost app 00:09:01.139 INFO: EXIT DONE 00:09:03.674 Waiting for block devices as requested 00:09:03.934 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:09:03.934 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:03.934 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:04.193 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:04.193 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:04.193 
0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:04.452 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:04.452 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:04.452 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:04.710 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:04.710 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:04.710 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:04.969 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:04.969 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:05.227 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:05.227 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:05.227 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:11.788 Cleaning 00:09:11.788 Removing: /dev/shm/spdk_tgt_trace.pid1406526 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1404216 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1405338 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1406526 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1407080 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1407849 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1408077 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1408886 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1408905 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1409224 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1409518 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1409844 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1410095 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1410338 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1410533 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1410725 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1410959 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1411702 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1414059 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1414422 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1414634 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1414802 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1415196 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1415360 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1415759 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1415889 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1416149 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1416219 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1416374 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1416551 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1416994 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1417187 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1417388 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1417486 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1417759 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1417862 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1417933 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1418167 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1418412 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1418672 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1418871 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1419069 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1419269 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1419460 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1419662 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1419853 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1420057 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1420252 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1420460 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1420708 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1420949 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1421192 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1421392 00:09:11.788 Removing: 
/var/run/dpdk/spdk_pid1421595 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1421792 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1421999 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1422290 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1422543 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1422797 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1423737 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1424113 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1424473 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1424835 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1425191 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1425545 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1425859 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1426175 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1426476 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1426824 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1427183 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1427539 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1427898 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1428254 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1428613 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1428975 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1429331 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1429690 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1430046 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1430405 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1430722 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1431040 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1431341 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1431687 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1432037 00:09:11.788 Removing: /var/run/dpdk/spdk_pid1432583 00:09:11.789 Removing: /var/run/dpdk/spdk_pid1432990 00:09:11.789 Removing: /var/run/dpdk/spdk_pid1433350 00:09:11.789 Removing: /var/run/dpdk/spdk_pid1433713 00:09:11.789 Removing: /var/run/dpdk/spdk_pid1434076 00:09:11.789 Removing: /var/run/dpdk/spdk_pid1434436 00:09:11.789 Removing: /var/run/dpdk/spdk_pid1434797 00:09:11.789 Clean 00:09:11.789 14:37:47 -- common/autotest_common.sh@1451 -- # return 0 00:09:11.789 14:37:47 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:09:11.789 14:37:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:11.789 14:37:47 -- common/autotest_common.sh@10 -- # set +x 00:09:11.789 14:37:47 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:09:11.789 14:37:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:11.789 14:37:47 -- common/autotest_common.sh@10 -- # set +x 00:09:11.789 14:37:48 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:09:11.789 14:37:48 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:09:11.789 14:37:48 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:09:11.789 14:37:48 -- spdk/autotest.sh@391 -- # hash lcov 00:09:11.789 14:37:48 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:09:11.789 14:37:48 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:09:11.789 14:37:48 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:09:11.789 14:37:48 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.789 14:37:48 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.789 14:37:48 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.789 14:37:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.789 14:37:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.789 14:37:48 -- paths/export.sh@5 -- $ export PATH 00:09:11.789 14:37:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.789 14:37:48 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:09:11.789 14:37:48 -- common/autobuild_common.sh@444 -- $ date +%s 00:09:11.789 14:37:48 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720787868.XXXXXX 00:09:11.789 14:37:48 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720787868.hKYdNC 00:09:11.789 14:37:48 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:09:11.789 14:37:48 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:09:11.789 14:37:48 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:09:11.789 14:37:48 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:09:11.789 14:37:48 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:09:11.789 14:37:48 -- common/autobuild_common.sh@460 -- $ get_config_params 00:09:11.789 14:37:48 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:09:11.789 14:37:48 -- common/autotest_common.sh@10 -- $ set +x 00:09:11.789 14:37:48 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:09:11.789 14:37:48 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:09:11.789 14:37:48 -- pm/common@17 -- $ local monitor 00:09:11.789 14:37:48 -- pm/common@19 -- $ 
for monitor in "${MONITOR_RESOURCES[@]}" 00:09:11.789 14:37:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:11.789 14:37:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:11.789 14:37:48 -- pm/common@21 -- $ date +%s 00:09:11.789 14:37:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:11.789 14:37:48 -- pm/common@21 -- $ date +%s 00:09:11.789 14:37:48 -- pm/common@25 -- $ sleep 1 00:09:11.789 14:37:48 -- pm/common@21 -- $ date +%s 00:09:11.789 14:37:48 -- pm/common@21 -- $ date +%s 00:09:11.789 14:37:48 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720787868 00:09:11.789 14:37:48 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720787868 00:09:11.789 14:37:48 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720787868 00:09:11.789 14:37:48 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720787868 00:09:11.789 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720787868_collect-vmstat.pm.log 00:09:11.789 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720787868_collect-cpu-load.pm.log 00:09:11.789 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720787868_collect-cpu-temp.pm.log 00:09:11.789 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720787868_collect-bmc-pm.bmc.pm.log 00:09:12.723 14:37:49 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:09:12.723 14:37:49 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j72 00:09:12.723 14:37:49 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:12.723 14:37:49 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:09:12.723 14:37:49 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:09:12.723 14:37:49 -- spdk/autopackage.sh@19 -- $ timing_finish 00:09:12.723 14:37:49 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:09:12.723 14:37:49 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:09:12.723 14:37:49 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:09:12.723 14:37:49 -- spdk/autopackage.sh@20 -- $ exit 0 00:09:12.723 14:37:49 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:09:12.723 14:37:49 -- pm/common@29 -- $ signal_monitor_resources TERM 00:09:12.723 14:37:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:09:12.723 14:37:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:12.723 14:37:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 
00:09:12.723 14:37:49 -- pm/common@44 -- $ pid=1440764 00:09:12.723 14:37:49 -- pm/common@50 -- $ kill -TERM 1440764 00:09:12.723 14:37:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:12.723 14:37:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:09:12.723 14:37:49 -- pm/common@44 -- $ pid=1440766 00:09:12.723 14:37:49 -- pm/common@50 -- $ kill -TERM 1440766 00:09:12.723 14:37:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:12.723 14:37:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:09:12.723 14:37:49 -- pm/common@44 -- $ pid=1440769 00:09:12.723 14:37:49 -- pm/common@50 -- $ kill -TERM 1440769 00:09:12.723 14:37:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:12.723 14:37:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:09:12.723 14:37:49 -- pm/common@44 -- $ pid=1440803 00:09:12.723 14:37:49 -- pm/common@50 -- $ sudo -E kill -TERM 1440803 00:09:12.723 + [[ -n 1298527 ]] 00:09:12.723 + sudo kill 1298527 00:09:12.732 [Pipeline] } 00:09:12.752 [Pipeline] // stage 00:09:12.758 [Pipeline] } 00:09:12.776 [Pipeline] // timeout 00:09:12.783 [Pipeline] } 00:09:12.802 [Pipeline] // catchError 00:09:12.807 [Pipeline] } 00:09:12.824 [Pipeline] // wrap 00:09:12.831 [Pipeline] } 00:09:12.845 [Pipeline] // catchError 00:09:12.856 [Pipeline] stage 00:09:12.859 [Pipeline] { (Epilogue) 00:09:12.875 [Pipeline] catchError 00:09:12.877 [Pipeline] { 00:09:12.892 [Pipeline] echo 00:09:12.894 Cleanup processes 00:09:12.900 [Pipeline] sh 00:09:13.184 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:13.184 1357236 sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720787452 00:09:13.184 1357269 bash /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720787452 00:09:13.184 1440925 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache 00:09:13.184 1441650 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:13.199 [Pipeline] sh 00:09:13.485 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:13.485 ++ grep -v 'sudo pgrep' 00:09:13.485 ++ awk '{print $1}' 00:09:13.485 + sudo kill -9 1440925 00:09:13.498 [Pipeline] sh 00:09:13.782 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:09:14.730 [Pipeline] sh 00:09:15.070 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:09:15.070 Artifacts sizes are good 00:09:15.085 [Pipeline] archiveArtifacts 00:09:15.093 Archiving artifacts 00:09:15.174 [Pipeline] sh 00:09:15.458 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest 00:09:15.473 [Pipeline] cleanWs 00:09:15.482 [WS-CLEANUP] Deleting project workspace... 00:09:15.482 [WS-CLEANUP] Deferred wipeout is used... 
00:09:15.488 [WS-CLEANUP] done 00:09:15.490 [Pipeline] } 00:09:15.509 [Pipeline] // catchError 00:09:15.523 [Pipeline] sh 00:09:15.805 + logger -p user.info -t JENKINS-CI 00:09:15.814 [Pipeline] } 00:09:15.830 [Pipeline] // stage 00:09:15.837 [Pipeline] } 00:09:15.853 [Pipeline] // node 00:09:15.859 [Pipeline] End of Pipeline 00:09:15.892 Finished: SUCCESS